Why MySQL Could Be Slow With Large Tables

If you’ve been reading enough database-related forums, mailing lists, or blogs, you have probably heard complaints from some users about MySQL being unable to handle more than 1,000,000 (or select any other number) rows. On the other hand, it is well known that, with customers like Google, Yahoo, LiveJournal, and Technorati, MySQL has installations with many billions of rows that deliver great performance. What could be the reason?

The reason is normally table design and understanding the inner workings of MySQL. If you design your data wisely, considering what MySQL can and cannot do, you will get great performance. And if not, you might become upset and become one of those bloggers. Note: every database management system is different in some respects, and what works well for Oracle, MS SQL, or PostgreSQL may not work well for MySQL, and vice versa. Even storage engines have very important differences which can affect performance dramatically.

The three main issues you should be concerned with when dealing with very large data sets are Buffers, Indexes, and Joins.

Buffers

The first thing you need to take into account is this fact: the situations when data fits in memory and when it does not are very different. If you started from an in-memory data size and expect a gradual performance decrease as the database size grows, you may be surprised by a severe drop in performance. This especially applies to index lookups and joins, which we cover later. As everything usually slows down a lot once it does not fit in memory, the good solution is to make sure your data fits in memory as well as possible. This can be done with data partitioning (i.e., old and rarely accessed data stored on different servers), multi-server partitioning to use combined memory, and a lot of other techniques which I should cover at some later time.

So that you understand how much having data in memory changes things, here is a small example with numbers. If you have your data fully in memory, you can perform over 300,000 random lookups per second from a single thread, depending on system and table structure. Now if your data is fully on disk (both data and index), you would need 2+ IOs to retrieve the row, which means you get about 100 rows/sec. Note that multiple drives do not really help a lot here, as we’re speaking about a single thread/query. So the difference is 3,000x! That might be overstating it a bit, as there are few completely uncached workloads, but a 100+ times difference is quite frequent.
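Here is that arithmetic as a runnable sketch. The figures are the illustrative numbers from the paragraph above, not measurements; the ~200 random IOs/sec is inferred from "2+ IOs per row, about 100 rows/sec":

```python
# Back-of-envelope: in-memory vs. fully disk-bound random lookups,
# single thread, using the illustrative figures from the text.
in_memory_lookups = 300_000           # random lookups/sec with data fully cached
ios_per_row = 2                       # index read + row read when nothing is cached
disk_ios_per_sec = 200                # one drive, single-threaded random IO (assumed)
disk_rows_per_sec = disk_ios_per_sec // ios_per_row

slowdown = in_memory_lookups / disk_rows_per_sec
print(disk_rows_per_sec, slowdown)    # 100 rows/sec, 3000x slower
```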

Indexes

What everyone knows about indexes is that they are good for speeding up access to the database. Some people would also remember that whether indexes are helpful or not depends on index selectivity – how large a proportion of rows match a particular index value or range. What is often forgotten is that the selectivity at which an index becomes beneficial also depends on whether the workload is cached or not. In fact, even the MySQL optimizer currently does not take this into account. For an in-memory workload, index access might be faster even if 50% of rows are accessed, while for a disk-IO-bound workload we might be better off doing a full table scan even if only a few percent of rows are accessed.

Let’s do some computations again. Consider a table which has 100-byte rows. With decent SCSI drives, we can get 100MB/sec read speed, which gives us about 1,000,000 rows per second for fully sequential access, with jam-packed rows – quite possibly a scenario for MyISAM tables. Now if we take the same hard drive for a fully IO-bound workload, it will be able to provide just 100 row lookups by index per second. The difference is 10,000 times for our worst-case scenario. It might not be that bad in practice, but again, it is not hard to reach a 100 times difference.
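The same worst-case comparison in code, again using only the assumed numbers from the paragraph above:

```python
# Sequential scan throughput vs. random index lookups on one drive.
row_size = 100                        # bytes per row
seq_read = 100 * 1000 * 1000          # 100MB/sec sequential read speed
seq_rows_per_sec = seq_read // row_size       # rows/sec for a full table scan
random_lookups_per_sec = 100                  # fully IO-bound index lookups

ratio = seq_rows_per_sec // random_lookups_per_sec
print(seq_rows_per_sec, ratio)        # 1,000,000 rows/sec sequential; 10,000x gap
```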

Here is a little illustration I’ve created using a table with over 30 million rows. The “val” column in this table has 10,000 distinct values, so the range 1..100 selects about 1% of the table. In my test, the full table scan finished in under 5 minutes, while the 1% range scan by index took about 6 times longer – roughly 29 minutes.
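The queries behind that comparison are not shown in this copy of the post, but from the description they were along these lines (the table name and index name here are my guesses, not from the original):

```sql
-- Hypothetical reconstruction; "tbl" and the index on "val" are assumed.
SELECT count(*) FROM tbl;                               -- full table scan
SELECT count(*) FROM tbl WHERE val BETWEEN 1 AND 100;   -- ~1% range scan via the index
```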

Also, remember – not all indexes are created equal. Some indexes may be stored in sorted order, while others have their pages placed in random locations – this can affect index scan/range scan speed dramatically. The rows referenced by an index could likewise be located sequentially or require random IO when index ranges are scanned. There are also clustered keys in InnoDB, which combine index access with data access, saving you IO for completely disk-bound workloads.

There are certain optimizations in the works which would improve the performance of index accesses/index scans. For example, retrieving index values first and then accessing rows in sorted order can help a lot for big scans. This will reduce the gap, but I doubt it will be closed.

Joins

Joins are used to compose a complex object which was previously normalized into several tables, or to perform complex queries finding relationships between objects. A normalized structure with a lot of joins is the right way to design your database, as textbooks teach, but when dealing with large data sets it can be a recipe for disaster. The problem is not the data size – normalized data is normally smaller – but the dramatically increased number of index lookups, which may be random accesses. This problem exists for all kinds of applications; however, for OLTP applications with queries examining only a few rows it is less of a problem. Data retrieval, search, DSS, and business intelligence applications, which need to analyze a lot of rows and run aggregates, are where this problem is most dramatic.

Some joins are also better than others. For example, if you have a star join with dimension tables being small, it would not slow things down too much. On the other hand, a join of a few large tables, which is completely disk-bound, can be very slow.

One of the things aggravating this problem in MySQL is the lack of advanced join methods at this point (work is on the way) – MySQL can’t do a hash join or a sort-merge join – it can only do the nested loops method, which requires a lot of index lookups which may be random.
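A minimal sketch of the nested loops method MySQL uses, with a Python dict standing in for the inner table’s index. The expensive part in the real engine is the one index probe per outer row, each of which may be a random disk read:

```python
# Nested loops join: for every row of the outer table, probe the inner
# table's index. One lookup per outer row is what makes this so costly
# when the index does not fit in memory.
def nested_loop_join(outer_rows, inner_index):
    """outer_rows: list of (key, payload); inner_index: dict key -> payload."""
    result = []
    for key, payload in outer_rows:        # one pass over the outer table
        match = inner_index.get(key)       # one (potentially random) index lookup
        if match is not None:
            result.append((payload, match))
    return result

outer = [(1, "a"), (2, "b"), (4, "d")]
index = {1: "x", 2: "y", 3: "z"}
print(nested_loop_join(outer, index))      # [('a', 'x'), ('b', 'y')]
```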

Here is a good example. As we saw, my 30-million-row (12GB) table was scanned in less than 5 minutes. Now if we do an equality join of that table to another 30-million-row table, the access will be completely random. We’ll need to perform 30 million random row reads, which at a rate of 100 rows/sec gives us 300,000 seconds. So we would go from 5 minutes to almost 4 days if we need to do the join. Some people assume a join would cost about as much as two full table scans (as 60 million rows need to be read) – but this is way off.
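Checking that estimate (the 100 rows/sec random-read rate is the assumed figure from earlier in the article):

```python
# Cost of a disk-bound nested loops join of two 30M-row tables.
rows_to_join = 30_000_000
random_rows_per_sec = 100             # disk-bound random row reads (assumed)
full_scan_seconds = 5 * 60            # full table scan time from the earlier test

join_seconds = rows_to_join / random_rows_per_sec
print(join_seconds, join_seconds / 86400)   # 300000.0 seconds, ~3.5 days
```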

Do not take this as me arguing against normalization or joins. Normalization is a great principle and should be used when possible. Just do not forget about the performance implications designed into the system, and do not expect joins to be free.

Finally, I should mention one more MySQL limitation which requires you to be extra careful when working with large data sets. In MySQL, a single query runs as a single thread (with the exception of MySQL Cluster), and MySQL issues IO requests one by one during query execution, which means that if single-query execution time is your concern, many hard drives and a large number of CPUs will not help. Sometimes it is a good idea to manually split the query into several queries run in parallel and aggregate the result sets.
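One way to do that manual split, sketched in Python. In a real setup each worker would run something like `SELECT SUM(...) FROM tbl WHERE id BETWEEN lo AND hi` on its own connection; here a plain list stands in for the table so the pattern is runnable as-is (all names are illustrative):

```python
# Split one big aggregate into N contiguous key ranges, run the pieces in
# parallel, and combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def split_range(lo, hi, parts):
    """Split [lo, hi) into `parts` contiguous sub-ranges."""
    step = (hi - lo + parts - 1) // parts
    return [(i, min(i + step, hi)) for i in range(lo, hi, step)]

table = list(range(1000))             # stand-in for a large table

def partial_sum(chunk):
    lo, hi = chunk
    return sum(table[lo:hi])          # stand-in for one range query

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, split_range(0, len(table), 4)))

print(total)                          # same answer as one big query: 499500
```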

So if you’re dealing with large data sets and complex queries, here are a few tips.

Try to fit the data set you’re working with in memory – Processing in memory is so much faster, and you solve a whole bunch of problems just by doing so. Use multiple servers to host portions of the data set. Store the portion of data you’re going to work with in temporary tables, etc.

Prefer full table scans to index accesses – For large, disk-bound data sets, full table scans are often faster than range scans and other types of index lookups. Even if you look at 1% of rows or less, a full table scan may be faster.

Avoid joins to large tables – Joining large data sets using nested loops is very expensive. Try to avoid it. Joins to smaller tables are OK, but you might want to preload them into memory before the join so there is no random IO needed to populate the caches.

With proper application architecture and table design, you can build applications operating with very large data sets based on MySQL.


Comments (152)

  • L

    Great article, gave me some good pointers.

    Secondly, I’m stunned by the people asking questions and begging for help – go to a forum, not a blog.

    Thanks again

    June 9, 2006 at 12:00 am
  • Ian Holsman

    Good post Peter..

just a couple of questions to clarify some things.

    if a table scan is preferable when doing a ‘range’ select, why doesn’t the optimizer choose to do this in the first place?

is there some sort of rule of thumb here.. use an index when you expect your queries to only return X% of data back? otherwise put a hint in your SQL to force a table scan?

what changes are in 5.1 which change how the optimizer parses queries..

does running optimize table regularly help in these situations? or just when you have a large change in your data distribution in your table?

    oh.. one tip for your readers.. always run ‘explain’ on a fully loaded database to make sure your indexes are being used

    June 9, 2006 at 5:14 pm
  • peter


Right. In theory the optimizer should know and select it automatically. In fact, it is not smart enough. It also simply does not have the data available – is the given index (range) currently in memory, or will it need to be read from disk? How fragmented is the index? How random would the accesses to retrieve the rows be? Until the optimizer takes this and much more into account, you will need to help it sometimes. I did not mention it in the article, but there is the IGNORE INDEX() hint to force a full table scan.
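    For example (table and index names are made up for illustration):

    ```sql
    -- IGNORE INDEX tells the optimizer not to consider the named index,
    -- so this range query falls back to a full table scan.
    SELECT count(*) FROM tbl IGNORE INDEX (val_idx)
    WHERE val BETWEEN 1 AND 100;
    ```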

There is no rule of thumb. As you could see in the article, in my test a range covering 1% of the table was 6 times slower than a full table scan… which means at about 0.2% a table scan becomes preferable. In other cases, especially for cached workloads, it can be as much as 30-50%.

In MySQL 5.1 there are tons of little changes. Which change are you speaking about?

OPTIMIZE helps for certain problems – i.e., it sorts the indexes themselves and removes row fragmentation (all for MyISAM tables). It however can’t make row retrieval by index sequential. One more hint: if you do all your ranges by a specific key, ALTER TABLE … ORDER BY key would help a lot.

Yes, running EXPLAIN is a good idea. Also do not forget to try it out for different constants – plans are not always the same.

    June 9, 2006 at 11:59 pm
  • Vadim

as I wrote in http://www.mysqlperformanceblog.com/2006/06/02/indexes-in-mysql/
the MySQL optimizer calculates Logical I/O for index access and for table scans.
One big mistake here, I think, is that MySQL assumes 100 key comparisons
like “if (searched_key == current_key)” are equal to 1 Logical I/O.
But I believe on modern boxes that constant of 100 should be much bigger.

    June 10, 2006 at 2:41 am
  • Alexey

For most workloads you’ll always want to provide enough memory to the key cache so its hit ratio is like 99.9%. Even if a table scan looks faster than index access on a cold-cache benchmark, it doesn’t mean that it’s a good idea to use table scans. It can easily hurt overall system performance – by thrashing the OS disk cache, and if we compare a table scan on data cached by the OS and an index scan on keys cached by MySQL, the table scan uses more CPU (because of syscall overhead and possible context switches due to syscalls).
    Just an opinion. 🙂

    June 10, 2006 at 7:07 am
  • peter


You probably misunderstood this article. I’m writing about working with large data sets – that is, when your tables and your working set do not fit in memory. This is the case when a full table scan will actually require less IO than using indexes. And this is when you can’t get a 99.99% key cache hit rate. Not to mention the key cache hit rate is only part of the problem – you also need to read the rows, which might be much larger and so not as well cached.

As you have probably seen from the article, my first advice is to try to get your data to fit in cache. If that is possible, you instantly have half of the problems solved. And yes, if data is in memory, indexes are preferred at lower cardinality than in the case of disk-bound workloads.

    June 14, 2006 at 7:41 am
  • Aaron

We’ve got 20,000,000 bank loan records we query against all sorts of tables. Everything is real real slow. We’re using LAMP. I could send the table structures and queries/PHP code that tends to bog down.

    June 16, 2006 at 3:44 pm
    • azmi

20m records is not so big compared to a social media database with almost 24/7 traffic – select, insert, update, delete, sort… every nanosecond or even less. You need a database expert to tune your database engine to suit your needs, server specs, RAM, HDD, etc.

      January 4, 2017 at 8:00 am
  • peter

    Hi Aaron,

Please feel free to send it to me at pz at mysql performance blog.com. Just do not forget EXPLAIN for your queries, and if you have PHP to load it up with some “random” data which is similar to yours, that would be great.

    We should take a look at your queries to see what could be done.

    June 17, 2006 at 6:14 am
  • Paul

    I implemented a simple logging of all my web sites access to make some statistics (sites access per day, ip address, search engine source, search queries, user text entries, …) but most of my queries went way too slow to be of any use last year. The problem started when I got to around 600,000 rows (table size: 290MB).

I tried a few things like OPTIMIZE and putting an index on all columns used in any of my queries, but it did not help that much since the table is still growing… I guess I may have to replicate it to another standalone PC to run some tests without killing my server CPU/IO every time I run a query.

I am not using any joins. I will try ‘explain’ and ‘IGNORE INDEX()’ when I have a chance, although I don’t think it will help since I added the indexes after I saw the problem.

    The index does make it very fast for one of my table on another project (list of all cities in the world: 3 million rows). I guess it’s all about memory vs hard disk access.

    Now my question is for a current project that I am developing. You can think of it as a webmail service like google mail, yahoo or hotmail.

Right now I am wondering if it would be faster to have one table per user for messages instead of one big table with all the messages and two indexes (sender id, recipient id). This could mean millions of tables, so it is not easy to test.

I will probably write a random users/messages generator to create a million users with a thousand messages each to test it, but you may already have some information on this, so it may save me a few days of guesswork.

    Any suggestions would be appreciated.

    June 19, 2006 at 1:46 pm
  • peter


A 300MB table is tiny. What kind of query are you trying to run, and what does the EXPLAIN output look like for that query? Also, what is your MySQL version?

There are some other tricks which you need to consider – for example, if you do GROUP BY and the number of resulting rows is large, you might get pretty poor speed because a temporary table is used and it grows large. Using SQL_BIG_RESULT helps to make it use sort instead.
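    Something like this (table and column names are made up for illustration):

    ```sql
    -- SQL_BIG_RESULT hints that the result set is large, so MySQL prefers
    -- sort-based grouping over a temporary table.
    SELECT SQL_BIG_RESULT sender_id, count(*)
    FROM messages GROUP BY sender_id;
    ```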

Speaking about webmail – depending on the number of users you’re planning for, I would go with a table per user, or with multiple users per table and multiple tables.

Besides having your tables more manageable, you would get your data clustered by message owner, which will speed up operations a lot. It also means that once a user logs in and views messages, they will be cached in the OS cache or MySQL buffers, speeding up further work dramatically.

    June 19, 2006 at 2:06 pm
  • Apachez

8. peter: Please (if possible) keep the results public (like in this blog thread, or create a new blog thread), since the findings might be interesting for others, to learn what to avoid and what the problem was in this case.

11. peter: However, with one table per user you might run out of file descriptors (the open_tables limit), which should be taken into consideration for designs where you would like to have “one table per user”.

    June 20, 2006 at 8:26 am
  • peter

    Apachez – yes this is the point.

    You get free answers to your questions by asking them in this blog (or at MySQL Forums) but other people can benefit from the answers as well.

Speaking about “table per user” – it does not mean you will run out of file descriptors. “table_cache” is what defines how many tables will be kept open, and you can configure it independently of the number of tables you’re using. You do, however, want to keep that value high in such a configuration to avoid constant table reopens. With InnoDB tables you also have all tables kept open permanently, which can waste a lot of memory, but that is another problem. Speaking about “open_file_limit”, which limits the number of files MySQL can use at the same time – on modern operating systems it is safe to set it to rather high values. I have used MySQL with over 100,000 files opened at the same time with no problems. Shutdown can be long in such a case, though.

    June 21, 2006 at 5:41 am
  • Martin Keller

I have a table with 35 mil records. When I wanted to add a column (ALTER TABLE), it would take about 2 days. Writing my own program in
C# that prepared a file for import shortened this task to about 4 hours.

    August 14, 2006 at 2:09 pm
  • peter

Hm. I’m actually quite surprised. ALTER TABLE normally rebuilds indexes by sort, and so does LOAD DATA INFILE (assuming we’re speaking about a MyISAM table), so such a difference is quite unexpected. It might be that for some reason ALTER TABLE was doing the index rebuild by key cache in your tests, which would explain it.

ALTER TABLE and LOAD DATA INFILE should however look at the same settings to decide which method to use.

    August 14, 2006 at 2:15 pm
  • Daniel

    I’m just dealing with the same issue with a message system.

At the moment I have one table (MyISAM/MySQL 4.1) for users’ inboxes and one for all users’ sent items.
The inbox table now holds about 1 million rows, nearly 1 gigabyte in total. The sent items table is about half that.
At this point it is working well with over 700 concurrent users.
Now I’m doing a recode, and there will be a lot more features like own folders etc., and the queries will be a lot more complex.
That’s why I’m now thinking about useful ways of designing the message table, and about what the best solution for the future is.

Is it really useful to have a separate message table for every user?

    Thanks a lot for any suggestions.

    System: It’s now on a 2xDualcore Opteron with 4GB Ram/Debian/Apache2/MySQL4.1/PHP4/SATA Raid1)
    In near future I will have the Apache on a dedicated machine and the Mysql Server too (and the next step will be a Master/Slave server setup for the database).

    August 20, 2006 at 3:06 am
  • peter


I would surely go with multiple tables. Whether it should be a table per user or not depends on the number of users. For 1,000 users that would work, but for 100,000 it would be too many tables.

I would have a many-to-many mapping from users to tables so you can decide later how many users you put per table, and I would also use composite primary keys if you’re using InnoDB tables, so data is clustered by user.

    August 20, 2006 at 5:28 am
  • Sergey


    I think you can give me some advise. I am running MySQL 4.1 on RedHat Linux. The box has 2GB of RAM, it has dual 2.8GHz Xeon processors, and /etc/my.cnf file looks like this.

    key_buffer = 512M
    max_allowed_packet = 8M
    table_cache = 512
    sort_buffer_size = 32M
    read_buffer_size = 32M
    read_rnd_buffer_size = 128M
    myisam_sort_buffer_size = 256M
    thread_cache = 32
    query_cache_size = 256M

    I am running data mining process that updates/inserts rows to the table (i.e. I do multifield select on indexed fields, and if row is found, I update the data, if not I insert new row). Up to about 15,000,000 rows (1.4GB of data) the procedure was quite fast (500-1000 rows per second), and then it started to slow down. Now I have about 75,000,000 rows (7GB of data) and I am getting about 30-40 rows per second.

    August 21, 2006 at 1:27 pm
  • peter

Sergey, would you mind posting your case on our forums instead, at
http://forum.mysqlperformanceblog.com, and I’ll reply there. This way more users will benefit from your question and my reply.

    August 21, 2006 at 3:17 pm
  • Sergey August 21, 2006 at 4:21 pm
  • D.ZHU

    Hi peter,

I’m doing a coding project that would result in massive amounts of data (it will reach somewhere around 9 billion rows within 1 year). Supposing I’m completely optimized…

    * If i run a ‘select from where…’ query, how long is the query likely to take?
    * also how long would an insert take?

I see you have in the example above 30 million rows of data, and a select took 29 mins!

    * and how would i estimate such performance figures?

    November 7, 2006 at 8:05 pm
  • peter

    D Zhu,

The answer depends on selectivity to a large extent, as well as on whether the where clause is matched by an index or a full scan is performed.

You also need to consider how wide the rows are – dealing with 10-byte rows is much faster than with 1000-byte rows.

Regarding how to estimate, I would do some benchmarks and match them against what you would expect and what you’re looking for. There are many design and configuration alternatives to deliver what you’re looking for.

    November 12, 2006 at 8:29 pm
  • Labus

Normally MySQL is rather fast loading data into a MyISAM table, but there is an exception, which is when it can’t rebuild indexes by sort but builds them
row by row instead. This can happen due to wrong configuration (i.e., too small myisam_max_sort_file_size or myisam_max_extra_sort_file_size), or
it could be just a lack of optimization, if you have large (does not fit in memory) PRIMARY or UNIQUE indexes.

    Below is the internal letter I’ve sent out on this subject which I guessed would be good to share

Today on my play box I tried to load data into a MyISAM table (which was
previously dumped with mysqldump --tab)

The data was some 1.3G, 15,000,000 rows, with 512MB of memory on the box.
MySQL 4.1.8

One could call it a trivially fast task; unfortunately, I had a
unique key on varchar(128) as part of the schema.

The load took some 3 hours before I aborted it, finding out it was just
about 20% done.

The problem is – unique keys are always rebuilt using the key_cache, which
means we’re down to some 100-200 rows/sec as soon as the index becomes
significantly larger than memory.

You can’t get away with ALTER TABLE DISABLE KEYS, as it does not affect
unique keys.

    November 29, 2006 at 12:23 am
  • peter


Yes, that is the problem. As I mentioned, sometimes if you want a quick build of a unique/primary key you need to do an ugly hack – create the table without the index, load the data, replace the .MYI file with one from an empty table of exactly the same structure but with the indexes you need, and call REPAIR TABLE.
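    Spelled out, the hack looks roughly like this (pseudocode – table and file names are made up, and this applies to MyISAM only, with the table not in use during the file swap):

    ```
    CREATE TABLE t_load (...same columns, no unique key...);
    LOAD DATA INFILE 'data.txt' INTO TABLE t_load;   -- fast: no key_cache rebuild
    -- stop MySQL (or otherwise make sure the table is closed), then at the shell:
    --   cp empty_with_keys.MYI t_load.MYI
    REPAIR TABLE t_load;                             -- rebuilds ALL indexes by sort
    ```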

    REPAIR TABLE builds all indexes by sort.

    November 29, 2006 at 2:39 am
  • Len

Google may use MySQL, but they don’t necessarily have billions of rows – just because Google uses MySQL doesn’t mean they actually use it for their search engine results. They have many little sections on their website, you know. Laughably, they even used PHP for one project. Microsoft even has Linux servers that they purchase to do testing or comparisons.

    January 26, 2007 at 2:55 pm
  • peter

The rumors are that Google is using MySQL for AdSense. Yahoo uses MySQL for about everything, of course not the full-text searching itself, as that just does not map well to a relational database.

    January 27, 2007 at 12:47 am
  • Jay Pipes

    @Len: not quite sure what you’re getting at…other than being obtuse.

    If you’d like to know how and what Google uses MySQL for (yes, AdSense, among other things), come to the Users Conference in April (http://mysqlconf.com). 4 Googlers are speaking there, as is Peter. Also, I don’t understand your aversion to PHP… what about using PHP is laughable? Sounds to me you are just flame-baiting…

    February 21, 2007 at 6:06 pm