
More than 1000 columns – get transactional with TokuDB

 | September 25, 2014 |  Posted In: InnoDB, Insight for DBAs, Insight for Developers, MySQL, TokuDB


Recently I encountered a situation in which a customer was forced to stay with the MyISAM engine because a legacy application used tables with over 1000 columns – a limit InnoDB cannot accommodate. I did not expect to hear this argument for MyISAM. Usually it is about full-text search or spatial index functionality that was missing from InnoDB, features introduced in MySQL 5.6 and 5.7, respectively, to let people forget about MyISAM. In this case, though, InnoDB still could not be used, so I gave TokuDB a try.

I created a simple bash script to generate a SQL file with a CREATE TABLE statement containing the number of columns I desired, and then tried to load it using different storage engines. Somewhat surprisingly, InnoDB failed once the column count exceeded 1017 – a little more than the documented maximum of 1000.
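A minimal sketch of such a generator might look like the following; the file path, table name, and defaults here are my own illustration, not the original script:

```shell
#!/bin/bash
# Sketch of a column-count test generator (hypothetical names and paths).
# Usage: ./gen_columns.sh [NUM_COLUMNS] [ENGINE]
N=${1:-1017}
ENGINE=${2:-InnoDB}
OUT=/tmp/many_columns.sql

{
  echo "DROP TABLE IF EXISTS test.many_columns;"
  echo "CREATE TABLE test.many_columns ("
  for i in $(seq 1 "$N"); do
    # The last column gets no trailing comma
    if [ "$i" -lt "$N" ]; then
      echo "  col$i TINYINT,"
    else
      echo "  col$i TINYINT"
    fi
  done
  echo ") ENGINE=$ENGINE;"
} > "$OUT"

echo "Wrote $N-column CREATE TABLE ($ENGINE) to $OUT"
# Load it with e.g.: mysql test < "$OUT"
```

Feeding the generated file to the mysql client with ENGINE set to InnoDB, MyISAM or TokuDB in turn is what exposes each engine's column limit.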

MyISAM let me create a maximum of 2410 columns, and I could achieve the same result with a TokuDB table! Whether I tried the tinyint or char(10) datatype, the same cap applied – I am not quite sure why it is exactly 2410, though.

So if you have that rare kind of table schema with so many columns and you wish to use a transactional storage engine, you may go with TokuDB, which is also available in recent Percona Server 5.6 releases.

You can find more details about column number limits in MySQL in this post, “Understanding the maximum number of columns in a MySQL table.”

Przemysław Malkowski

Przemek joined the Support Team at Percona in August 2012. Before that he spent over five years working for Wikia.com (Quantcast Top 50) as a System Administrator, where he was a key person responsible for the seamless build-up of the MySQL-powered database infrastructure. Besides MySQL he worked on maintaining all other parts of the LAMP stack, with a main focus on automation, monitoring and backups.

3 Comments

  • In this case the real reason for using over 1000 columns was simply an old application which still has to be used but is not going to be changed, as its development lifetime has finished. I think this is quite a common problem.
    On the other hand, I can imagine a very specific sensible use case where some scientific objects have thousands of properties, though indeed this would not be a common one at all 🙂
    My point was that it is great that we have this kind of choice in MySQL – for some cases InnoDB fits just great, for others TokuDB may be better (for "real" reasons, like excellent compression, disk-IO-bound workloads, etc).

  • Yes, this problem is very common.
    The point is to learn from it and not to teach others to make the same mistakes again and again.
    As a DBA I see such things directly, and it's amazing what you can get out of an old machine by optimizing instead of just buying a newer one and hoping that will solve the problem (in the long term…)! 😉

    And yes, there's a gap between what you are taught in school about normalization and what your application affords in reality.
    This problem grows as more and more applications use the table in different ways.
    My solution is to measure and profile events again and again…!
