
10+ Ways to Crash or Overload MySQL

November 13, 2007 | Posted In: Insight for DBAs


People sometimes contact me about bugs like this one, which gives a user with basic privileges a trivial way to crash MySQL, and ask me what to do.

My answer to them is: there is nothing new here, and they should just sit back and relax 🙂

Really, there are many ways to crash a server, or otherwise make it unavailable, in any MySQL version, if you have access to it with normal privileges. We’re constantly helping people fix mistakes in their applications which make MySQL Server useless (though, to be honest, few of them cause actual crashes), so obviously it is even easier if you have intent.

In my opinion, MySQL security should be treated the following way: if you do not allow any access to the MySQL Server, you are reasonably secure. There have been a few attacks which did not require a valid MySQL account, and they were normally dealt with very quickly by MySQL. The moment you give someone access to the MySQL Server, all MySQL security guarantees is that you can stop them from seeing (or changing) data you do not want them to touch – it does not ensure you can stop them from overtaking the server.

This will not really change until MySQL Server implements global resource management, because otherwise you can’t really control how many resources a user can take.

You may say a resource hog is not a crash? Indeed, but it can still make the server unavailable. Moreover, in many cases you can use resource overload to make MySQL consume so much memory that it swaps badly until it runs out of space and gets killed. On 32-bit systems it was even easier: all you had to do was get enough memory allocated to hit the address space limit, and MySQL would crash when one of its internal memory allocations unexpectedly failed.

To give you a couple of hints:

Temporary Tables – You can build a query (with derived tables) which uses as many temporary tables as you like, and you can size them so that they are all still created in memory.
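
A minimal sketch of what is meant here (the table name t is hypothetical): each derived table below is materialized into its own temporary table, and with small enough row counts they all stay in memory, so repeating the pattern multiplies the number of in-memory temporary tables a single query creates.

```sql
-- Hypothetical sketch: each derived table (d1, d2, ...) is materialized
-- as a separate temporary table; keep each small so it stays in memory,
-- and extend the list to allocate as many as you like.
SELECT COUNT(*)
FROM (SELECT * FROM t LIMIT 1000) AS d1,
     (SELECT * FROM t LIMIT 1000) AS d2,
     (SELECT * FROM t LIMIT 1000) AS d3;
```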

Memory Tables – If you can create MEMORY tables, you can create any number of them, and even though max_heap_table_size restricts the size of each table, the total size is unrestricted. Note you can create the tables as TEMPORARY so they are not easily visible on the file system.
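
To illustrate (table names are made up): each table individually respects max_heap_table_size, but nothing caps the total across tables.

```sql
-- Hypothetical sketch: every MEMORY table obeys max_heap_table_size on
-- its own, but the number of such tables – and thus the total memory –
-- is unbounded. TEMPORARY keeps them invisible on the file system.
CREATE TEMPORARY TABLE mem1 (pad CHAR(255)) ENGINE=MEMORY;
CREATE TEMPORARY TABLE mem2 (pad CHAR(255)) ENGINE=MEMORY;
-- ... repeat, filling each with INSERT ... SELECT up to the per-table limit
```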

MyISAM Sort Buffer – This one is typically set large, as it is assumed only a couple of tables will be repaired at the same time. What if a user uses all 100 of his allowed connections to ALTER 100 different tables? This can be offset by keeping myisam_sort_buffer_size low, but then performance will suffer.

Prepared Statements Number – Happily, there is now a limit on the total number of prepared statements (max_prepared_stmt_count) which can exist per server, so it is better than before, when an application which forgot to close prepared statements could easily make the server take up all memory. However, there is no per-user limit, so one user can consume all the prepared statements, breaking other applications which need them. Moreover, not all prepared statements consume the same amount of memory, and by preparing complex statements you can eat a lot of memory. The workaround for this issue is to avoid the use of prepared statements and keep max_prepared_stmt_count very low.
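
The pattern is simply this (the statement text is hypothetical):

```sql
-- Hypothetical sketch: each PREPARE holds server-side memory and counts
-- against the global max_prepared_stmt_count; there is no per-user limit.
PREPARE s1 FROM 'SELECT * FROM t WHERE a = ? AND b = ? AND c = ?';
PREPARE s2 FROM 'SELECT * FROM t WHERE a = ? AND b = ? AND c = ?';
-- ... keep preparing, and never run DEALLOCATE PREPARE
```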

Prepared Statements and Blob Data – If you want to grow the memory consumed by a single prepared statement, you can create a statement with thousands of placeholders and send data for each of them using the mysql_stmt_send_long_data call – the server buffers such data until you have executed the prepared statement.

Innodb Table Cache Leak – Innodb never shrinks its internal table cache (the data dictionary), so by creating and accessing a large number of Innodb tables you can allocate a large amount of memory on the server. The size is typically 4-8KB per table, though complex tables can require more, so this is mainly a problem for smaller servers.

Table Cache and Merge Tables – The table cache is allocated in entries, and it is normally assumed each entry will use no more than a couple of file descriptors. This is not the case with MERGE tables, for example – creating and accessing a few MERGE tables with 1000 subtables each will likely cause your MySQL server to run out of file descriptors. The same is true for partitioned tables in MySQL 5.1.
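
A sketch of the MERGE case (table names hypothetical): opening the MERGE table requires a file descriptor for every subtable, far beyond the "couple per table cache entry" assumption.

```sql
-- Hypothetical sketch: a MERGE table over N MyISAM subtables needs on
-- the order of N file descriptors when opened.
CREATE TABLE m1 (i INT) ENGINE=MyISAM;
CREATE TABLE m2 (i INT) ENGINE=MyISAM;
-- ... create m3 through m1000 the same way ...
CREATE TABLE merged (i INT) ENGINE=MERGE UNION=(m1, m2 /* , ..., m1000 */);
SELECT COUNT(*) FROM merged;
```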

Disk Space – For MyISAM tables, hosting providers used to rely on disk quotas, and you can use a similar technique with innodb_file_per_table. However, you can’t control the growth of the Innodb system tablespace, which is used to hold undo data: you can grow it through the roof by opening a transaction and doing a lot of updates, or simply by keeping a transaction open while other users do updates – Innodb can only purge data after the oldest transaction needing the snapshot commits. You can partially work around this by killing transactions which are too old, though the proper solution would be to implement some form of limit on undo segment size. Another possibility is to run queries which use large temporary tables or sort files, which can take up all the disk space; even if they are placed on a separate partition, filling it up will leave other users unable to run their queries.
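
The open-transaction variant can be as simple as this (table name hypothetical): the idle transaction pins the purge point, so undo data from everybody else’s updates accumulates in the system tablespace, which never shrinks back.

```sql
-- Hypothetical sketch: hold a snapshot open while others update heavily;
-- InnoDB cannot purge the undo records those updates generate.
START TRANSACTION;
SELECT * FROM some_innodb_table LIMIT 1;  -- establishes the snapshot
-- ... now simply stay connected and do nothing.
```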

Stored Procedures – How much memory can a stored procedure allocate? Say, can you create 1000 variables in a stored procedure and set a 1MB value for each of them? I have not experimented with other stored procedure language constructs, but I do not think a tight memory allocation policy is enforced.
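
A sketch of that experiment (procedure and variable names are made up):

```sql
-- Hypothetical sketch: many procedure variables, each set to a large
-- string; no tight per-procedure memory policy appears to be enforced.
DELIMITER //
CREATE PROCEDURE eat_memory()
BEGIN
  DECLARE v1 LONGTEXT;
  DECLARE v2 LONGTEXT;
  -- ... DECLARE up to v1000 ...
  SET v1 = REPEAT('x', 1000000);
  SET v2 = REPEAT('x', 1000000);
  -- ... and so on for each variable
END //
DELIMITER ;
```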

Stored Procedure Cursors – Cursors inside stored procedures are implemented as temporary tables, so by opening a large number of cursors which are handled as in-memory temporary tables you can consume an unbounded amount of memory.
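
For example (names hypothetical), each cursor below gets its own temporary table behind the scenes:

```sql
-- Hypothetical sketch: every open cursor is backed by a temporary table,
-- so cursors opened together multiply the memory consumed.
DELIMITER //
CREATE PROCEDURE many_cursors()
BEGIN
  DECLARE c1 CURSOR FOR SELECT * FROM big_table;
  DECLARE c2 CURSOR FOR SELECT * FROM big_table;
  -- ... declare more cursors ...
  OPEN c1;
  OPEN c2;  -- leave them all open at once
END //
DELIMITER ;
```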

Stored Procedure Recursion – It does not have to be recursion per se – just different stored procedures calling each other. Calls require memory allocations, especially stack allocations. There are some protections to ensure you do not run out of stack, but they might not cover all cases.

Server-Side Variables – Each server-side variable can hold a value up to max_allowed_packet in size (1MB by default), but there does not seem to be any limit on the number of server-side variables one can create.
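
Meaning user-defined variables, e.g.:

```sql
-- Hypothetical sketch: each @variable can hold up to max_allowed_packet
-- bytes, and nothing limits how many of them one connection creates.
SET @v1 = REPEAT('x', 1000000);
SET @v2 = REPEAT('x', 1000000);
-- ... @v3, @v4, ... for as long as memory lasts
```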

Parse Tree – The query is internally represented as a parse tree inside MySQL, whose size of course depends on the query size, which is controlled by max_allowed_packet. However, some MySQL optimizations, such as equality propagation and range expansion, can cause the parse tree to blow up in size. The most trivial case was fixed, though I’m not sure all possibilities were validated.

Session Variables – There is no restriction on how large an unprivileged user can set per-connection variables, which allows running queries with uncontrolled resource usage.
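
For instance (table and column names hypothetical), an ordinary user can do:

```sql
-- Hypothetical sketch: per-session buffer sizes need no special
-- privilege, so a user can request absurd allocations and trigger them.
SET SESSION sort_buffer_size = 1024 * 1024 * 1024;  -- 1GB per sort
SET SESSION tmp_table_size   = 1024 * 1024 * 1024;
SELECT * FROM big_table ORDER BY unindexed_column;
```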

Host Block – You can get a given client host blocked from accessing the server by simulating a number of failed connections. This can be avoided by setting the max_connect_errors variable high, but that will obviously disable the brute-force password protection.

Mutex Saturation – Both Innodb and MyISAM have hotspots, and with a few connections which heavily use the appropriate operations, you can reduce system performance beyond being functional.

General Overload – As MySQL does not have much resource utilization control, you can simply run heavy queries to leave the MySQL Server barely functional. The limits which do exist are not really helpful, as they do not define the query complexity or resource consumption allowed per user. Queries with heavy disk IO can be among the worst, because they both overload the IO system and wipe out the MySQL and OS caches, which can cause other users’ queries to perform an order of magnitude slower than normal.

Some of these come from real experiences; others are just my guesses about what could break things.

As you can see from these points, it does not look like MySQL tries to make the server bulletproof against somebody trying to break it intentionally – most of the limits, such as max_heap_table_size or max_prepared_stmt_count, are designed to protect against typical application mistakes, not against someone intentionally trying to bring the MySQL Server down.

Note: I explored only some of the server-side objects – generally, one would need to ensure that for each and every object there is some global quota so it can’t consume too much memory, and also that all of it can’t be consumed by a single user or single connection.

P.S. You might ask how all this can be true when there are thousands of virtual hosting companies offering MySQL access. Sure they do, and many of them are lucky to have users who use MySQL lightly and do not try to crash or overtake it. Others constantly have to detect and restrict abusers. This is not to mention that many virtual hosting companies use old MySQL versions, which typically have more issues.

Nothing I write here is a security hole unknown to the MySQL team. What I’m trying to do is explain the issue, so there are no misunderstandings about how secure MySQL is.

Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.


  • I believe I am encountering the Innodb Table Cache Leak problem. Is it possible to release this memory in some way? Will FLUSH TABLES release this memory?

  • Hi Peter,

    great post, thanks!

    I was wondering though what is meant by:

    Server Side Variables
    Session variables

    I think by “Server Side Variables” you mean “User-defined variables” like

    SELECT @myvar:='myvalue';

    and by “Session variables” you mean a user can do

    SET @@session.some_resource_limiting_var := extremely_high_number


  • […] 10+ Ways to Crash or Overload MySQL – while everyone is trying to improve the performance and stability of their database server, the goal of this article is to show you a dozen of the shortest paths to bringing your database down (sometimes along with its data). Listen to the authors and try to do the opposite! […]

  • I wonder if MySQL Proxy (http://forge.mysql.com/wiki/MySQL_Proxy) could be used to “filter” bad queries. We have a huge database mainly used by freehost users, and yesterday someone tried (and successfully managed) to exploit bug #32125 just as it was written in its bug report. This cost us 1.5 hours of MySQL unavailability (the exploit was run 3 times, and after each crash MySQL took 30 minutes to start up – we have a really HUGE MySQL instance). I wonder if full disclosure of critical bugs should even be posted publicly on bugs.mysql.com before the bug is fixed and a patch is publicly available.

  • Piotr,

    You can catch the obvious cases this way, but normally it should not be hard to modify the query so it is different enough that you do not catch it, while it still triggers the bug. Not to mention the query alone may not be the issue – it could be the combination of schema, configuration, data, and query which crashes.

    Regarding bugs – true security bugs (not just crashes) are handled that way: they should be sent to security@mysql.com instead of being posted publicly, and even if somebody posts them publicly, they may be hidden.

  • Hi Peter,

    Pretty good list.

    Re. “Host block”, max_connect_errors does not come into play with failed password attempts. As far as I know, MySQL does not have protection against brute force password attempts. The max_connect_errors setting is only taken into account for failed handshakes and network problems.

    Check the places that call inc_host_errors(); there aren’t many (all in sql_parse.cc), and it’s always called as:




  • Basically there are two types of “crashing the system”. The first type is something clearly stupid
    or malicious – like creating 10000 variables, loading gigabyte BLOBs, etc. These problems exist almost
    everywhere, and nobody in his right mind would actually do them. The other type is doing
    something that looks harmless and results in a crashed system. The second type is more important.

  • Jeremy, This is very strange. I just checked on the system:
    [pz@mail pz]$ mysqladmin extended | grep conn
    | Aborted_connects | 0 |
    | Max_used_connections | 0 |
    | Threads_connected | 1 |
    [pz@mail pz]$ mysql -u sdf -p df
    Enter password:
    ERROR 1045 (28000): Access denied for user 'sdf'@'localhost' (using password: YES)
    [pz@mail pz]$
    [pz@mail pz]$ mysql -u sdf -pdf
    ERROR 1045 (28000): Access denied for user 'sdf'@'localhost' (using password: YES)
    [pz@mail pz]$ mysqladmin extended | grep conn
    | Aborted_connects | 2 |
    | Max_used_connections | 0 |
    | Threads_connected | 1 |

    So a wrongly entered password is counted as an aborted connect. If those do not count towards max_connect_errors, that is just strange.

  • Hi Peter,

    Yes, it’s strange. Aborted_connects has no relation to max_connect_errors. The only place it’s ever checked is in sql/sql_parse.cc:

    848 if (!(specialflag & SPECIAL_NO_RESOLVE))
    849 {

    861 if (connect_errors > max_connect_errors)
    862 return(ER_HOST_IS_BLOCKED);
    863 }

    Where connect_errors comes from:

    851 thd->main_security_ctx.host=
    852 ip_to_hostname(&thd->remote.sin_addr, &connect_errors);

    There are a couple of things to consider with this:
    * Since inc_host_errors is never called on password failure, bad password attempts don’t count.
    * If you use skip_name_resolve, none of that code is called at all.
    * If a host doesn’t have a valid PTR (ip to host mapping), nothing is ever counted.

    As you probably know, I’ve been trying to get MySQL to overhaul how it handles hosts for a long time now. My host cache patches are a good start to this:




  • Actually, to add one further bullet point:

    * localhost doesn’t count either, as it’s not really looked up and thus doesn’t have an entry in hostname_cache to keep a count of errors.

  • Given this statement: “Really – there are many ways to crash or otherwise made unavailable server with any MySQL version if you have access to it with normal privileges”, shouldn’t the title be something like “10+ Ways to Keep MySQL Alive”?

    Not to mention, this blog entry would be even shorter than it actually is right now.

  • Peter,
    I’m trying to send longtext data of 1GB using prepared statement like
    Insert into test(long_text) values (?);

    while (chunck_cnt<535000)
    mysql_stmt_send_long_data(mysql, 0, chunk, 2000);


    The above code never finishes execution.
    After debugging and timing the execution of mysql_stmt_send_long_data(), I found that this function initially takes 0 microseconds but grows up to 375000 microseconds for some chunks.

    Can you provide any help in this case?


  • Hello Peter,
    Thanks for the post!!!
    My application often faces the issue of crashed tables, and I have to use the REPAIR TABLE ‘table_name’ command every time.
    Please help me understand the following:

    1) Should I migrate from MySQL to another SQL server, or is there no need to migrate?
    2) Will the crashing tables lead to bigger problems in the future? I need to decide quickly, as my data is growing day by day.

