More than 1000 columns – get transactional with TokuDB

Recently I encountered a situation in which a customer was forced to stay with the MyISAM engine because a legacy application uses tables with over 1000 columns — unfortunately, InnoDB imposes a limit here. I did not expect to hear this argument for MyISAM. Usually it is about the full-text search or spatial index functionality that was missing from InnoDB, and which was introduced in MySQL 5.6 and 5.7, respectively, to let people forget about MyISAM. In this case, though, InnoDB still could not be used, so I gave TokuDB a try.

I created a simple bash script to generate an SQL file with a CREATE TABLE statement containing the number of columns I desired, and then tried to load it using different storage engines. A bit surprisingly, InnoDB failed with a column count above 1017, which is a little more than the documented maximum of 1000.
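The script itself was not published with the post; a minimal sketch of how such a generator might look is below. The table name (`t_wide`), column naming pattern (`c1`..`cN`), default column count, and output filename are all my assumptions, not the original script.

```shell
#!/bin/bash
# Generate an SQL file with a CREATE TABLE statement holding N columns,
# so the result can be loaded against different storage engines.
NUM_COLS=${1:-1017}    # number of columns to generate (assumed default)
ENGINE=${2:-InnoDB}    # storage engine to test
OUT=${3:-t_wide.sql}   # output file

{
  echo "CREATE TABLE t_wide ("
  # all but the last column get a trailing comma
  for i in $(seq 1 $((NUM_COLS - 1))); do
    echo "  c$i TINYINT,"
  done
  echo "  c$NUM_COLS TINYINT"
  echo ") ENGINE=$ENGINE;"
} > "$OUT"
```

Loading the generated file (for example with `mysql test < t_wide.sql`) then shows whether the chosen engine accepts that column count.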

MyISAM let me create a maximum of 2410 columns, and I could achieve the same result with a TokuDB table! I tried both tinyint and char(10) datatypes, and the same cap applied — though I'm not quite sure why it is exactly 2410.

So if you have that rare kind of table schema with that many columns and you wish to use a transactional storage engine, you may go with TokuDB, which is also available in recent Percona Server 5.6 releases.

You can find more details about column number limits in MySQL in this post, “Understanding the maximum number of columns in a MySQL table.”

Comments (3)

  • Reiner Rusch

    I would say the problem is more historical and was done wrong from the beginning.
    What real circumstance forces the use of a 1000-column table?

    September 26, 2014 at 1:06 pm
  • Przemysław Malkowski

    In this case the real reason to use over 1000 columns was an old application that still has to be used but is not going to be changed, as its development lifetime has ended. I think this is quite a common problem.
    On the other hand, I can imagine a very specific, sensible use case where some scientific objects have thousands of properties, though indeed this would not be a common one at all 🙂
    My point was that it is great that we have this kind of choice in MySQL – for some cases InnoDB fits just great, while for others TokuDB may be better (for "real" reasons, like excellent compression, disk-IO-bound workloads, etc.).

    September 26, 2014 at 3:00 pm
  • Reiner Rusch

    Yes, this problem is very common.
    The point is to learn from it and not to teach others to make the same mistakes again and again.
    As a DBA I see such things directly, and it's amazing what you can get out of an old machine by optimizing instead of just buying a newer one and hoping that solves the problem (in the long term…)! 😉

    And yes, there's a gap between what you are taught in school about normalization and what your application affords in reality.
    This problem grows as more and more applications use the table in different ways.
    My solution is to measure and profile events again and again….!

    September 26, 2014 at 3:11 pm

Comments are closed.
