If you’ve been reading enough database-related forums, mailing lists, or blogs, you have probably heard complaints from some users about MySQL being unable to handle more than 1,000,000 (or pick any other number) rows. On the other hand, it is well known that with customers like Google, Yahoo, LiveJournal, and Technorati, MySQL has installations with many billions of rows and delivers great performance. What could be the reason?
The reason is normally table design and understanding the inner workings of MySQL. If you design your data wisely, considering what MySQL can and cannot do, you will get great performance. And if not, you might become upset and become one of those bloggers. Note – every database management system is different in some respects, and what works well for Oracle, MS SQL, or PostgreSQL may not work well for MySQL, and the other way around. Even storage engines have very important differences which can affect performance dramatically.
The three main issues you should be concerned with when dealing with very large data sets are buffers, indexes, and joins.
The first thing you need to take into account is this fact: the situations when data fits in memory and when it does not are very different. If you started with an in-memory data size and expect a gradual performance decrease as the database size grows, you may be surprised by a severe drop in performance. This especially applies to index lookups and joins, which we cover later. As everything usually slows down a lot once it does not fit in memory, the good solution is to make sure your data fits in memory as well as possible. This can be done by data partitioning (i.e. old and rarely accessed data stored on different servers), multi-server partitioning to use combined memory, and a lot of other techniques which I should cover at some later time.
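The old-vs-recent partitioning idea can be sketched as a simple routing rule. This is a minimal illustration, not production code – the server names and the cutoff date are hypothetical assumptions:

```python
# Sketch of date-based partitioning across servers. The server names
# ("db-hot", "db-archive") and the cutoff date are assumptions for
# illustration only.
from datetime import date

ARCHIVE_CUTOFF = date(2006, 1, 1)

def server_for(row_date):
    # Hot, recent data stays on the primary so it can fit in memory;
    # old, rarely accessed data goes to a cheaper archive server.
    return "db-hot" if row_date >= ARCHIVE_CUTOFF else "db-archive"

print(server_for(date(2006, 5, 1)))   # db-hot
print(server_for(date(2004, 5, 1)))   # db-archive
```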
So you understand how much having data in memory changes things, here is a small example with numbers. If your data is fully in memory, you can perform over 300,000 random lookups per second from a single thread, depending on system and table structure. Now if your data is fully on disk (both data and index), you would need 2+ IOs to retrieve the row – which means you get about 100 rows/sec. Note: multiple drives do not really help a lot here, as we’re speaking about a single thread/query. So the difference is 3,000x! It might be a bit of an extreme case, as there are few completely uncached workloads, but a 100+ times difference is quite frequent.
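The arithmetic behind these figures can be written down explicitly. The rates below are the assumed ballpark numbers from the paragraph above, not benchmarks:

```python
# Back-of-envelope numbers from the text (assumptions, not measurements):
IN_MEMORY_LOOKUPS_PER_SEC = 300_000   # random lookups/sec, data fully cached
RANDOM_IOS_PER_SEC = 200              # ~5 ms per seek on a single drive
IOS_PER_ROW = 2                       # index block + data block, both uncached

disk_rows_per_sec = RANDOM_IOS_PER_SEC // IOS_PER_ROW           # 100 rows/sec
slowdown = IN_MEMORY_LOOKUPS_PER_SEC // disk_rows_per_sec

print(disk_rows_per_sec)  # 100
print(slowdown)           # 3000 — the "3,000x" difference in the text
```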
What everyone knows about indexes is the fact that they are good for speeding up access to the database. Some people would also remember that whether indexes are helpful or not depends on index selectivity – how large a proportion of rows match a particular index value or range. What is often forgotten is that, depending on whether the workload is cached or not, the selectivity at which an index becomes beneficial is different. In fact, even the MySQL optimizer currently does not take this into account. For in-memory workloads, index access might be faster even if 50% of rows are accessed, while for disk-IO-bound access we might be better off doing a full table scan even if only a few percent of rows are accessed.
Let’s do some computations again. Consider a table which has 100-byte rows. With decent SCSI drives, we can get 100MB/sec read speed, which gives us about 1,000,000 rows per second for fully sequential access, with jam-packed rows – quite possibly a scenario for MyISAM tables. Now if we take the same hard drive for a fully IO-bound workload, it will be able to provide just 100 row lookups by index per second. The difference is 10,000 times for our worst-case scenario. It might not be that bad in practice, but again, it is not hard to reach a 100 times difference.
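These numbers also give a break-even selectivity for the disk-bound case. Again, the drive characteristics are assumptions from the paragraph above, not benchmarks:

```python
# Checking the numbers above (assumed drive characteristics, not benchmarks):
ROW_BYTES = 100
SEQ_BYTES_PER_SEC = 100_000_000       # ~100MB/sec sequential read
RANDOM_ROWS_PER_SEC = 100             # fully IO-bound index lookups

seq_rows_per_sec = SEQ_BYTES_PER_SEC // ROW_BYTES   # 1,000,000 rows/sec
worst_case_gap = seq_rows_per_sec // RANDOM_ROWS_PER_SEC

# Break-even selectivity: below this fraction of rows, index lookups win;
# above it, the full table scan is faster on a disk-bound workload.
break_even = RANDOM_ROWS_PER_SEC / seq_rows_per_sec

print(seq_rows_per_sec)   # 1000000
print(worst_case_gap)     # 10000 — the "10,000 times" in the text
print(break_even)         # 0.0001, i.e. about 0.01% of rows
```

So in this worst case, a full table scan wins as soon as more than roughly 0.01% of rows are touched – consistent with the claim that even a few percent of rows can favor a scan.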
Here is a little illustration I’ve created on a table with over 30 million rows. The “val” column in this table has 10,000 distinct values, so the range 1..100 selects about 1% of the table. Here are the times for a full table scan vs. a range scan by index:
mysql> select count(pad) from large;
+------------+
| count(pad) |
+------------+
|   31457280 |
+------------+
1 row in set (4 min 58.63 sec)

mysql> select count(pad) from large where val between 1 and 100;
+------------+
| count(pad) |
+------------+
|     314008 |
+------------+
1 row in set (29 min 53.01 sec)
Also, remember – not all indexes are created equal. Some indexes may be stored in sorted order, while others have their pages placed in random locations – this may affect index scan/range scan speed dramatically. The rows referenced by an index also could be located sequentially or require random IO if index ranges are scanned. There are also clustered keys in InnoDB which combine index access with data access, saving you IO for completely disk-bound workloads.
There are certain optimizations in the works which would improve the performance of index accesses/index scans. For example, retrieving index values first and then accessing rows in sorted order can be a lot of help for big scans. This will reduce the gap, but I doubt it will be closed.
Joins are used to compose the complex object which was previously normalized into several tables, or to perform complex queries finding relationships between objects. A normalized structure with a lot of joins is the right way to design your database, as textbooks teach you, but when dealing with large data sets it could be a recipe for disaster. The problem is not the data size – normalized data normally becomes smaller – but the dramatically increased number of index lookups, which could be random accesses. This problem exists for all kinds of applications; however, for OLTP applications with queries examining only a few rows, it is less of a problem. Data retrieval, search, DSS, and business intelligence applications, which need to analyze a lot of rows, run aggregates, etc., are where this problem is the most dramatic.
Some joins are also better than others. For example, if you have a star join with dimension tables being small, it would not slow things down too much. On the other hand, a join of a few large tables, which is completely disk-bound, can be very slow.
One of the reasons aggravating this problem in MySQL is the lack of advanced join methods at this point (the work is on the way) – MySQL can’t do a hash join or sort-merge join – it can only do the nested loops method, which requires a lot of index lookups which may be random.
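To make the nested loops method concrete, here is a minimal sketch of it in Python. The tables and column layout are invented for illustration; in a real disk-bound join, each of the inner-table lookups can be a random IO:

```python
# Sketch of a nested loop join (hypothetical tiny in-memory tables; MySQL
# would use a B-tree index on the inner table instead of this dict).
users  = [(1, "alice"), (2, "bob"), (3, "carol")]   # (user_id, name)
orders = [(10, 1), (11, 1), (12, 3)]                # (order_id, user_id)

# Build an index on the inner table's join column.
by_user = {}
for order_id, user_id in orders:
    by_user.setdefault(user_id, []).append(order_id)

# For every outer row, do one index lookup into the inner table.
# With disk-bound data, each lookup may turn into a random read.
result = []
for user_id, name in users:
    for order_id in by_user.get(user_id, []):
        result.append((name, order_id))

print(result)  # [('alice', 10), ('alice', 11), ('carol', 12)]
```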
Here is a good example. As we saw, my 30-million-row (12GB) table was scanned in less than 5 minutes. Now if we do an equality join of that table to another 30-million-row table, the access will be completely random. We’ll need to perform 30 million random row reads, which gives us 300,000 seconds at a 100 rows/sec rate. So we would go from 5 minutes to almost 4 days if we need to do the join. Some people assume the join would cost close to two full table scans (as 60 million rows need to be read) – but this is far from correct.
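The arithmetic for this worst case, using the assumed rates from earlier in the article:

```python
# Back-of-envelope join cost (assumed rates from the text, not benchmarks):
ROWS = 30_000_000
RANDOM_ROWS_PER_SEC = 100      # random row reads, fully disk-bound
FULL_SCAN_SEC = 5 * 60         # the ~5 minute full scan measured above

join_sec = ROWS // RANDOM_ROWS_PER_SEC          # 300,000 sec
join_days = join_sec / 86_400                   # ≈ 3.5 days
vs_two_scans = join_sec // (2 * FULL_SCAN_SEC)  # vs. "two full table scans"

print(join_sec)              # 300000
print(round(join_days, 1))   # 3.5
print(vs_two_scans)          # 500 — ~500x costlier than two full scans
```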
Do not take me as going against normalization or joins. Normalization is a great principle and should be used when possible. Just do not forget about the performance implications designed into the system, and do not expect joins to be free.
Finally, I should mention one more MySQL limitation which requires you to be extra careful when working with large data sets. In MySQL, a single query runs as a single thread (with the exception of MySQL Cluster), and MySQL issues IO requests one by one during query execution, which means that if single-query execution time is your concern, many hard drives and a large number of CPUs will not help. Sometimes it is a good idea to manually split the query into several queries run in parallel and aggregate the result sets.
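The split-and-aggregate idea can be sketched as follows. A real version would send each chunk to MySQL as a separate query over its own connection (e.g. a `COUNT` over a range of primary key values); here a plain Python function stands in for the server round trip, which is an assumption for illustration:

```python
# Sketch of manually parallelizing one big aggregate: split the key range
# into chunks, run each chunk in its own thread, sum the partial results.
# count_chunk is a stand-in for a per-chunk query such as:
#   SELECT COUNT(*) FROM large WHERE id >= lo AND id < hi
from concurrent.futures import ThreadPoolExecutor

def count_chunk(lo, hi):
    # Simulated server work; a real version would run the query above.
    return sum(1 for _ in range(lo, hi))

total_rows, workers = 1_000_000, 4
step = total_rows // workers
ranges = [(i * step, (i + 1) * step) for i in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    partials = list(pool.map(lambda r: count_chunk(*r), ranges))

print(sum(partials))  # 1000000
```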
So if you’re dealing with large data sets and complex queries, here are a few tips.
Try to fit the data set you’re working with in memory – Processing in memory is so much faster, and you have a whole bunch of problems solved just by doing so. Use multiple servers to host portions of the data set. Store the portion of data you’re going to work with in temporary tables, etc.
Prefer full table scans to index accesses – For large data sets, full table scans are often faster than range scans and other types of index lookups. Even if you look at 1% of rows or less, a full table scan may be faster.
Avoid joins to large tables – Joining large data sets using nested loops is very expensive. Try to avoid it. Joins to smaller tables are OK, but you might want to preload them into memory before the join so there is no random IO needed to populate the caches.
With proper application architecture and table design, you can build applications operating with very large data sets based on MySQL.