
Indexes in MySQL

June 2, 2006 | Posted In: Insight for DBAs


MySQL does not always make the right decision about index usage.
Consider a simple table:
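As a concrete sketch, the setup might look like the following (the table name, the second column, and the index name are assumptions; only the column ID1 and the row counts appear in the text):

```sql
-- Hypothetical table; only the column ID1 is named in the post.
CREATE TABLE t1 (
  ID  INT NOT NULL,
  ID1 INT NOT NULL,
  KEY idx_id1 (ID1)
);

-- V1: total number of rows in the table
SELECT COUNT(*) FROM t1;                 -- 250001

-- V2: rows matching the condition
SELECT COUNT(*) FROM t1 WHERE ID1 = 1;   -- 83036, ~110 ms
```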

The total number of rows in the table: 250001 (V1).

The number of rows matching the condition ID1=1: 83036 (V2), counted in 110 ms.

That is, the index selectivity for the condition (ID1=1) is V2/V1 = 83036/250001 = 0.3321, or 33.21%.

It is said (e.g., in the book “SQL Tuning”) that if selectivity is over 20%, a full table scan is preferable to index access.
As far as I know, Oracle always chooses a full table scan if selectivity is over 25%.

What about MySQL?

EXPLAIN shows that MySQL will use the index for this query.
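One way to check which plan MySQL picks is to run EXPLAIN on the query (the table and column names here are assumptions; the exact EXPLAIN columns vary across MySQL versions):

```sql
-- If the 'type' column shows 'ref' and 'key' names the ID1 index,
-- MySQL has chosen index access; 'type: ALL' would mean a full scan.
EXPLAIN SELECT * FROM t1 WHERE ID1 = 1;
```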

Let’s compare the execution time with index access and with a table scan:

Index access: 410 ms

Table scan: 200 ms

As you can see, the table scan is about two times faster.
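Both timings can be reproduced by forcing each plan explicitly with index hints (a sketch; the table and index names are assumptions):

```sql
-- Plan 1: force index access (~410 ms in the measurements above)
SELECT * FROM t1 FORCE INDEX (idx_id1) WHERE ID1 = 1;

-- Plan 2: force a full table scan by hiding the index (~200 ms)
SELECT * FROM t1 IGNORE INDEX (idx_id1) WHERE ID1 = 1;
```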

Consider a more extreme case, selectivity ~95%:

V2/V1 = 0.9492 = 94.92%

EXPLAIN still claims MySQL will use the index.

Execution time:

Index access: 1200 ms

Table scan: 260 ms

That is, the table scan is 4.6 times faster.

Why does MySQL choose index access?
MySQL doesn’t calculate index selectivity; it just estimates the count of logical input/output operations, and in our case the estimated logical I/O for index access is lower than for a table scan. The estimate does not account for the index plan’s reads being scattered (random) while a table scan reads sequentially, which is why the cheaper-looking plan is slower in practice.

So be careful with indexes: they do not help in all cases.

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming help optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.


  • Dear Pradeep,

    Try to select a restricted set of data from the table.
    Since you cannot show all 400,000 (4 lakh) records on one page,
    use LIMIT or add pagination to your page.
    Also, when selecting the data, avoid the LIKE operator; search with the = operator instead.
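The advice above can be sketched in SQL (the table and column names are hypothetical):

```sql
-- Paginate instead of fetching all rows at once: 50 rows per page.
SELECT * FROM mytable ORDER BY id LIMIT 50;             -- page 1
SELECT * FROM mytable ORDER BY id LIMIT 50 OFFSET 50;   -- page 2

-- Prefer = to LIKE: an equality match can use an index directly,
-- while a leading-wildcard LIKE ('%term%') forces a full scan.
SELECT * FROM mytable WHERE name = 'exact value';
```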


  • Actually, the problem is much more complicated than it looks. A while back I did benchmarks, and depending on the situation I could get an index being more optimal than a full table scan even if 70% of rows would be accessed, or a full table scan being faster than retrieving 1% of rows by index, if they all end up in different locations on the disk. So MySQL is not optimal, but a hard 20% threshold would not be better either.

    For a wise decision MySQL would need to consider a lot of things, including the type of IO (sequential vs. random), cache efficiency, table size relative to memory size, etc.

    In general a much more complex cost model is required, which means a serious optimizer overhaul. Such changes are a serious step, as a different optimizer will change a lot of execution plans, and some will surely change for the worse, since no optimizer is perfect in all cases. This makes it a scary step, besides the optimizer being a very complex piece of software.

  • According to the O’Reilly “Oracle SQL Tuning” pocket guide, Oracle moves to a table scan if it expects to read more than 12% of the rows. Supposedly MySQL does so at 30%.

  • Well, maybe for Oracle it is 12%, I don’t know for sure, but as Peter said it is not always good to have hard values; there can be a lot of cases.
    Regarding MySQL: MySQL does not use a selectivity calculation, so it’s impossible to say at what point MySQL prefers a table scan.

  • One more thing to add: MySQL has to deal with multiple storage engines, which complicates things a lot. For example, MEMORY tables have a very small penalty for “random IO”, and for InnoDB tables a full table scan is a scan by the primary index.

  • MySQL performance debate…

    I must admit that after the SPEC jAppServer2004™ results with MySQL, my interest rose in MySQL as a good candidate for high performance database applications. If Google AdSense, Flickr, Technorati and many others have used it for years, can it be…

  • Folks, I am new to sql as was looking for some guidance on the following sql select statement

    Select AttributeValue FROM DT_Attribute a WHERE AttributeName = 'cn' AND
    ObjectID IN (SELECT ObjectID FROM DT_Object o WHERE a.VersionID = o.VersionID
    AND ObjectType = 'ncpServer')

    Table         Rows       Data Length  Index Length
    DT_Attribute  3,243,993  280.4 MB     157.8 MB
    DT_Object     79,828     291.2 MB     5.9 MB

    Running this resulted in 500,000,000 I/O reads per hour on a 4-way 3.06 GHz machine with 3 GB of RAM.

    Is there a way to analyse such a SELECT statement?
    Is there an explanation why MySQLD-NT was solidly consuming 25% of the CPU rather than asking for more?

    Thanks in advance,


  • ::Is there an explanation why MySQLD-NT was solidly consuming 25% of the CPU rather than asking for more?::

    Sure, it’s because the database is disk bound. The CPU has to wait for the disk I/O to complete. It could mean your disks are slower than they should be. Switching to SCSI drives (if you’re not using them already) may help. Of course, if the table was small enough (…)

  • Mike,

    Disk is one possible problem, the other reason (for 25% in particular) would be CPU bound workload using totally one CPU out of 4. Depending on how you set up graphing you may see combined CPU usage or per CPU.

  • Hello,
    Can you please help me to improve MySQL query performance? We have a table with about 200,000 (2 lakh) rows, so please give me tips to improve this SQL’s performance.

    Thanks in Advance

  • First, I have to say: great post!

    I have a real estate web app where some houses can be marked as “Distinguished”, something like “Featured”, so those properties are shown in a special way.

    I’d like to look up all the properties with a given feature in the database. Suppose there are 3 kinds of feature (e.g. Special, Great, Good), and every property has one. If I have 500 properties, my selectivity is about 0.006, so an index wouldn’t be a good choice. But I still want to speed up my search; what can I do? I’ve been thinking of keeping 3 in-memory arrays containing the IDs of the properties, one array for each feature. So, for example, I’d have the array of Special properties, which would look like this:
    SpecialsProps = [1,15,52,355,61,123,561].

    Then if I need to search for all the Special properties, I would perform a “SELECT … WHERE id IN SpecialProps”, and then the ID primary key (a unique index) would be used. But in this case I’d be forced to look up the index several times for those IDs, which might not be faster than a full scan (at least, that’s what I think). Another good strategy is, having all the properties cached, I could reference them directly.

    So, to finish this comment, a simple question: does MySQL have any index like Oracle’s bitmap index?

    Thank you very much!

  • OK, thanks for your quick answer, Vadim. I’m taking a look at your book; it seems awesome. I’ll try to get it from Amazon; it’s a little difficult because I’m in Argentina, but I’ll do my best.

    So, do you think i could use one of the strategies i mentioned above?

  • Hi all,

    I select data from a table and show it on one page, but the page load takes a long time because the table has 4 lakh (400,000) records.

    What should I do to fetch data from the table quickly? Please suggest a solution.


    pradeep kumar jangir

  • Hello,

    Sorry in advance if I am wrong, but I was under the impression that SELECT COUNT(*) FROM mytable; is a very specific case when the table is MyISAM.

    MySQL will ALWAYS get the result straight from the table statistics instead of counting. Indeed, it will always be VERY fast (MySQL docs, cited from memory).

    So while your presentation may be acceptable in some cases, I would tend to think you missed the very point of COUNT(*) for MyISAM.

    Best regards


  • You can use the FastBit bitmap index from MySQL, using the FastBit UDFs. Check this link: https://github.com/greenlion/FastBit_UDF

    Some excerpt:

    About these UDF Functions and FastBit

    FastBit is a data store which implements WAH (word-aligned hybrid) bitmap indexes. These UDFs create, modify and query FastBit tables. The UDFs treat a single directory on the filesystem as one FastBit table. Inside the FastBit table/directory are directories representing partitions. The partitions are created automatically when data is loaded.

    All functions take the table path/directory as the first argument.

    FastBit WAH bitmap indexes are optimal for multi-dimensional range scans, unlike b-tree indexes, which are optimal only for one-dimensional queries. This means FastBit can very efficiently handle queries that MySQL cannot, like select c1 from table where c2 between 1 and 20 or c3 between 1 and 90 or c4 in (1,2,3). MySQL cannot answer that query using a b-tree index and will resort to a full table scan.

    All columns of a FastBit table are automatically bitmap indexed.


    The UDFs provided are: fb_helper, fb_inlist, fb_create, fb_load, fb_query, fb_debug, fb_unlink, fb_delete, fb_insert, fb_insert2, fb_resort.
