How the number of columns affects performance

It is fairly well understood that tables with long rows tend to be slower than tables with short rows. I was interested in checking whether row length is the only thing that matters, or whether the number of columns also plays an important role. Since I wanted peak row-processing speed, I looked at full table scans with the data completely cached in the OS cache. I created three tables: the first containing a single TINYINT column, which is almost the shortest type possible (CHAR(0) could take less space); a second with one TINYINT column and a CHAR(99) column; and a third with 100 TINYINT columns. The latter two tables have the same row length but differ in column count by a factor of 50. Finally, I created a fourth table, also with 100 columns, but with one of them a VARCHAR, which causes the row format to be dynamic.

More specifically:

I populated each of the tables with 12M rows, getting a 7-byte row size for the first table and 101 bytes for the second and third.
I used a simple scan query, such as select max(t1) from t100;, for the test.
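The original DDL was not included in the post; a plausible reconstruction of the four MyISAM tables (names taken from the results below, exact column types are my assumption) would be:

```sql
-- Assumed reconstructions of the test tables; the post did not show the DDL.
CREATE TABLE t1 (t1 TINYINT UNSIGNED NOT NULL) ENGINE=MyISAM;

CREATE TABLE t1c99 (
  t1  TINYINT UNSIGNED NOT NULL,
  c99 CHAR(99)         NOT NULL
) ENGINE=MyISAM;

-- 100 TINYINT columns t1 ... t100 (middle columns elided for brevity)
CREATE TABLE t100 (
  t1   TINYINT UNSIGNED NOT NULL,
  /* t2 ... t99 */
  t100 TINYINT UNSIGNED NOT NULL
) ENGINE=MyISAM;

-- 99 TINYINT columns plus one VARCHAR; the VARCHAR makes the row format dynamic
CREATE TABLE t99v1 (
  t1 TINYINT UNSIGNED NOT NULL,
  /* t2 ... t99 */
  v1 VARCHAR(1) NOT NULL
) ENGINE=MyISAM;
```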

The result was as follows:
t1 – 1.00 sec (12M rows/sec ; 80MB/sec)
t1c99 – 1.71 sec (7M rows/sec ; 676MB/sec)
t100 – 1.77 sec (7M rows/sec ; 653MB/sec)
t99v1 – 12.36 sec (1M rows/sec ; 93MB/sec)

This shows there is clearly a problem with dynamic row format tables that have many columns. But is it because of the large number of columns, or is the dynamic format slow on its own?
I have tested yet another table structure:
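The structure was not shown in the post; presumably it was a two-column table along these lines (an assumption based on the name and the reported row length):

```sql
-- Assumed definition: one TINYINT plus one short VARCHAR, which is enough
-- to switch MyISAM to the dynamic row format. The 20-byte row length is
-- plausibly the minimum block size MyISAM uses for dynamic rows.
CREATE TABLE t1v1 (
  t1 TINYINT UNSIGNED NOT NULL,
  v1 VARCHAR(1) NOT NULL
) ENGINE=MyISAM;
```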

This table has a row length of 20 (which was a bit of a surprise to me), and the result for it was:

t1v1 – 1.83 sec (6.5M rows/sec; 125MB/sec)

So there is clearly a penalty for dynamic rows; however, it is not very significant if the number of columns is small. With a large number of columns, dynamic rows become very expensive, and you have to watch out.

I have not looked at the code and would appreciate any developer comments, but I guess that for dynamic-row tables a certain conversion has to take place when internal data structures are populated (everything but TEXTs/BLOBs is fixed length while it is being processed). This conversion process depends on the number of columns, while for fixed rows the MyISAM storage format matches the internal one, so you can basically do a memory copy, which does not depend on the number of columns.

Another interesting observation is the access speed to different columns: max(t1) and max(t99) took the same time, which means there is no penalty for accessing a column at the end of the row rather than at the start, at least when it comes to MyISAM.
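In other words, both of these queries against the 100-column table took the same time:

```sql
SELECT MAX(t1)  FROM t100;  -- first column in the row
SELECT MAX(t99) FROM t100;  -- column near the end of the row
```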

The common workaround for such wide tables is to use a covering index. I added one to the t99v1 table and repeated the query:
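The exact statements were not shown in the post; a reconstruction (an assumption on my part) might look like the following:

```sql
-- Assumed index and query; the original commands were not shown.
-- Listing t1 after another column keeps the index covering for the query
-- while preventing the MIN/MAX metadata shortcut, so MySQL has to scan
-- the whole index rather than the (wide) table rows.
ALTER TABLE t99v1 ADD INDEX idx_cover (t2, t1);
SELECT MAX(t1) FROM t99v1;
```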

As you can see, the index scan is not as fast as the table scan, covering about 3.7M rows/sec, but that is still pretty fast.

So much for MyISAM; what about Innodb? Here are the results for Innodb with all data in the buffer pool, again to measure peak speed.
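The Innodb setup was not shown; one way to reproduce it (an assumption) is to convert each table and warm the buffer pool before timing:

```sql
-- Assumed conversion; the post did not show the Innodb setup commands.
ALTER TABLE t1 ENGINE=InnoDB;   -- repeat for t1c99, t100, t99v1, t1v1
SELECT MAX(t1) FROM t1;         -- run once first to pull the data into the buffer pool
```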

The results for Innodb were:
t1 – 5.11 sec (2.3M rows/sec)
t1c99 – 5.74 sec (2.1M rows/sec)
t100 – 15.16 sec (0.8M rows/sec)
t99v1 – 14.93 sec (0.8M rows/sec)
t1v1 – 5.26 sec (2.3M rows/sec)
t99v1 (covering idx) – 5.62 sec (2.1M rows/sec)

As you can see, Innodb is a lot slower and shows behavior similar to that of dynamic-row tables in both cases. This is because Innodb does not store data in the native MyISAM format, so conversion is needed in all cases. We can also see that the table scan speed can be up to 5 times slower for very short rows – some of this goes back to the fact that Innodb rows carry a lot of transaction control overhead.

Also note that the covering index scan speed is very similar to the full table scan speed – this is rather expected, as Innodb table data is stored in a BTREE very similarly to how indexes are stored.

Summary: Beware of dynamic row format tables with many columns; they might bite you with a surprise slowdown. MyISAM is much faster than Innodb when it comes to in-memory full table scans.

P.S. Tests were done on MySQL 5.4.2 on an Intel(R) Xeon(R) E5405 @ 2.00GHz CPU.

Comments (15)

  • Matt

    “As you can see MyISAM is a lot slower and has behavior…” – I think you meant to say InnoDB there 🙂

    September 28, 2009 at 8:02 pm
  • peter


    Thanks corrected.

    September 29, 2009 at 8:30 am
  • Eli

    I had been meaning to test this issue (how the number of columns affect the performance) for quite some time now – and I thank you for saving me the trouble 🙂

    September 29, 2009 at 2:15 pm
  • Eli

    Oh, and by the way – your CAPTCHA has a problem with calculating 9 + 6 …
    To my knowledge, it’s 15, but it seems your CAPTCHA thinks otherwise 🙂

    September 29, 2009 at 2:16 pm
  • peter


Sorry about the captcha. Looks like some other glitch. What kind of error message did you get, if any?

    September 29, 2009 at 4:26 pm
  • Gunnar

    Hi Peter,

    Thanks for the benchmark.
    I wonder if the test is comparing apples to oranges by accident.

    The 4 test tables that you use have very different sizes.
    How much influence has the different size of the test tables on the results?
    Wouldn’t using tables with equal row size give a better to compare result?

    Take care

    September 30, 2009 at 2:14 am
  • peter


    This is why I specify MB/sec too 🙂

The sizes are only different when I want them to be. t1c99, t100 and t99v1 all have the same row length in MyISAM, and yet they have different scan speeds. The other tables are of different row lengths and mainly provided for comparison, so we can see how both the number and the type of columns affect scan speed.

    September 30, 2009 at 9:48 am
  • Gunnar

    Hi Peter,

    >>Wouldn’t using tables with equal row size give a better to compare result?

    >This is why I specify MB/sec too
    >t1 – 1.00 sec (12M rows/sec ; 80MB/sec)
    >t1c99 – 1.71 sec (7M rows/sec ; 676MB/sec)
    >t100 – 1.77 sec (7M rows/sec ; 653MB/sec)
    >t99v1 – 12.36 sec (1M rows/sec ; 93MB/sec)

    I see. 🙂

    You said:
    > So there is surely the penalty for dynamic rows, however it is not very significant
    > if number of columns is small. For large number of columns dynamic rows become very expensive and you have to watch out.

    I fully agree with your conclusion that there is a penalty for “dynamic” rows.
    Looking at your numbers we see the result of 653MB/sec versus 93MB/sec.
    Which means that a static row is 7 times faster than a dynamic row.

But I’m not sure if the number of columns is of any relevance or if it’s the row length.
I would assume that a 2-column but longer row should get a huge penalty also.

Could you be so kind and test what result you are getting for a table like “t1v99”?
CREATE TABLE t1v99 (
t1 tinyint(3) UNSIGNED NOT NULL,
v99 varchar(99) NOT NULL DEFAULT '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'
);

Could we compare this result with “t1c99” please?


    September 30, 2009 at 11:41 pm
  • Eli

    In reply to #5: The message is: “You have failed the challenge!” (it happened again right now, when I tried to answer: 8 + 0 = 8)

    October 11, 2009 at 1:39 pm
  • jcn50

    >t1 – 1.00 sec (12M rows/sec ; 80MB/sec)
    >t1c99 – 1.71 sec (7M rows/sec ; 676MB/sec)

    I think there is a typo on the first line, no? It should be:
    >t1 – 1.00 sec (12M rows/sec ; __ 800MB__ /sec)

    August 17, 2010 at 12:23 am
  • George

I am designing a django site where there are 12 columns. Each of these columns needs to be searched together, like where category=animals and orientation=landscape, etc., because I am providing a search page with different dropdowns for each of these 12 options. The problem is I am a beginner with this, and I am expecting the database to grow to millions of rows in the future. Database inserts on the site are few, at most 500/day. Should I stay with the 12-column model or should I split it? If I search like I said above, will it cause performance drops? All the columns are charfields < 30 chars.

    January 31, 2013 at 8:41 am
  • Jim Mansir


Do you think it would be more efficient to have multiple (7-10) different tables and minimize the record count in each table, or would it be faster to add a column to differentiate between my types of data and put them all in the same table? The only difference is the data type; all the rest of the columns would be the same. There would be 15 columns of data in the multiple-table scenario and 16 if I had to add a data type column. We are probably looking at 2 million records total.



    May 25, 2013 at 4:13 pm
  • Sudhakar Narra

    thanks for explaining this 🙂

    November 29, 2013 at 7:25 am
  • Ulrik Lystbæk

I’m not an expert on the subject, but isn’t the penalty the result of the decrease in fanout? As the number of columns grows, each row gets larger, and the fanout of the table structure usually depends on the page size, so larger rows mean a lower fanout. This could be solved by creating an index on the columns you need, which creates a BTree with a fanout affected only by the needed columns (the columns in the index). Like I said, I am not an expert in this field and haven’t tested whether the new index would be affected by the column count in the table.

    December 13, 2013 at 6:25 am
  • Ramesh Naidu

    Hi Friends,

I want to save products with more than 25 variants in a table, so I want to add 25 columns to the table. I will store them with a reference key (auto-increment value). Please suggest: will it have any effect on performance?

    October 20, 2014 at 3:45 pm

