
Store UUID in an optimized way

Karthik Appigatla | December 19, 2014 | Posted In: MySQL


A few years ago Peter Zaitsev, in a post titled “To UUID or not to UUID,” wrote: “There is timestamp based part in UUID which has similar properties to auto_increment and which could be used to have values generated at same point in time physically local in BTREE index.”

For this post I rearranged the timestamp part of the UUID (Universally Unique Identifier) and ran some benchmarks.

Many people store UUIDs as char(36) and use them as the row identity value (PRIMARY KEY) because a UUID is unique across every table, every database and every server, and allows easy merging of records from different databases. But using a UUID as the PRIMARY KEY causes the problems described below.

Problems with UUID

  • A UUID has 36 characters, which makes it bulky.
  • InnoDB stores data in PRIMARY KEY order, and all secondary keys also contain the PRIMARY KEY. So having a UUID as the PRIMARY KEY makes the indexes bigger, and they may not fit into memory.
  • Inserts are random, so the data gets scattered.

Despite these problems, people still prefer UUIDs because they are unique across every table and can be generated anywhere. In this post, I will explain how to store a UUID efficiently by rearranging its timestamp part.

Structure of UUID

MySQL uses UUID version 1, which is a 128-bit number represented by a utf8 string of five hexadecimal numbers:

  • The first three numbers are generated from a timestamp.
  • The fourth number preserves temporal uniqueness in case the timestamp value loses monotonicity (for example, due to daylight saving time).
  • The fifth number is an IEEE 802 node number that provides spatial uniqueness. A random number is substituted if the latter is not available (for example, because the host computer has no Ethernet card, or we do not know how to find the hardware address of an interface on the host operating system). In this case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision should have very low probability.

The timestamp is mapped as follows: when the timestamp has the (60-bit) hexadecimal value 1d8eebc58e0a7d7, the first three parts of the UUID are set to 58e0a7d7-eebc-11d8, giving a full UUID such as 58e0a7d7-eebc-11d8-9669-0800200c9a66. The 1 before the most significant digits of the timestamp (in 11d8) indicates the UUID version; for time-based UUIDs this is 1.

The fourth and fifth parts would be mostly constant if the UUIDs are generated from a single server, while the first three parts are based on the timestamp and so are monotonically increasing. Let’s rearrange the sequence to make the UUID closer to sequential, which makes inserts and recent-data look-ups faster. The dashes (‘-’) carry no information, so let’s remove them:
58e0a7d7-eebc-11d8-9669-0800200c9a66 => 11d8eebc58e0a7d796690800200c9a66
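
A minimal sketch of this rearrangement in SQL, assuming a type-1 UUID as produced by MySQL’s UUID() function (the inline expression below is illustrative; the post itself wraps the logic in a stored function, shown under Conclusions):

    SET @uuid = '58e0a7d7-eebc-11d8-9669-0800200c9a66';
    SELECT UNHEX(CONCAT(
             SUBSTR(@uuid, 15, 4),   -- time-high and version: 11d8
             SUBSTR(@uuid, 10, 4),   -- time-mid:              eebc
             SUBSTR(@uuid,  1, 8),   -- time-low:              58e0a7d7
             SUBSTR(@uuid, 20, 4),   -- clock sequence:        9669
             SUBSTR(@uuid, 25)       -- node:                  0800200c9a66
           )) AS uuid_ordered;       -- 16 bytes: 11d8eebc58e0a7d796690800200c9a66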

Benchmarking

I created three tables:

  • events_uuid – UUID binary(16) as PRIMARY KEY
  • events_int – an additional BIGINT auto-increment column as the PRIMARY KEY, with a secondary index on the UUID column
  • events_uuid_ordered – rearranged UUID binary(16) as PRIMARY KEY

I created three stored procedures, each of which inserts 25K random rows at a time into its respective table. Three more stored procedures call the random-insert procedures in a loop, recording the time taken to insert each batch of 25K rows as well as the data and index size after each loop. In total, I inserted 25M records.
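
For illustration, here is a minimal sketch of what one of the random-insert procedures could look like (the procedure name, loop structure and single-column table are assumptions, not the post’s exact code):

    DELIMITER //
    CREATE PROCEDURE insert_uuid_ordered(IN num_rows INT)
    BEGIN
      DECLARE i INT DEFAULT 0;
      WHILE i < num_rows DO
        -- generate a fresh type-1 UUID and rearrange it before storing
        SET @u = UUID();
        INSERT INTO events_uuid_ordered (uuid)
        VALUES (UNHEX(CONCAT(SUBSTR(@u, 15, 4), SUBSTR(@u, 10, 4),
                             SUBSTR(@u, 1, 8), SUBSTR(@u, 20, 4),
                             SUBSTR(@u, 25))));
        SET i = i + 1;
      END WHILE;
    END //
    DELIMITER ;

    -- one benchmark iteration: CALL insert_uuid_ordered(25000);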

    • Data Size (chart) – horizontal axis: number of inserts × 25,000; vertical axis: data size in MB. The data size for the UUID table is larger than for the other two tables.
    • Index Size (chart) – horizontal axis: number of inserts × 25,000; vertical axis: index size in MB.
    • Total Size (chart) – horizontal axis: number of inserts × 25,000; vertical axis: total size in MB.
    • Time Taken (chart) – horizontal axis: number of inserts × 25,000; vertical axis: time taken in seconds.

For the table with the UUID as PRIMARY KEY, you can see that as the table grows, the time taken to insert rows increases almost linearly, whereas for the other tables it stays almost constant.

The UUID table is almost 50% bigger than the ordered-UUID table and 30% bigger than the table with a BIGINT PRIMARY KEY. Comparing the ordered-UUID table with the BIGINT table, the insert times and sizes are almost the same, though they may vary slightly depending on the index structure.

Table Structure
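
The table definitions accompanied the original post as a code listing; a minimal reconstruction is sketched below (the engine and any columns beyond the keys are assumptions):

    CREATE TABLE events_uuid (
      uuid BINARY(16) NOT NULL,
      PRIMARY KEY (uuid)
    ) ENGINE=InnoDB;

    CREATE TABLE events_int (
      id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      uuid BINARY(16) NOT NULL,
      PRIMARY KEY (id),
      KEY idx_uuid (uuid)
    ) ENGINE=InnoDB;

    CREATE TABLE events_uuid_ordered (
      uuid BINARY(16) NOT NULL,   -- timestamp-rearranged UUID
      PRIMARY KEY (uuid)
    ) ENGINE=InnoDB;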

Conclusions

 

    • Create a function to rearrange the UUID fields and use it (illustrative sketches follow under Inserts and Selects).

Inserts
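
A hedged sketch of an insert that rearranges the UUID inline (illustrative, not necessarily the post’s exact statement):

    SET @u = UUID();
    INSERT INTO events_uuid_ordered (uuid)
    VALUES (UNHEX(CONCAT(SUBSTR(@u, 15, 4), SUBSTR(@u, 10, 4),
                         SUBSTR(@u, 1, 8), SUBSTR(@u, 20, 4),
                         SUBSTR(@u, 25))));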

Selects
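
And a corresponding look-up, converting the stored binary value back to hex for display (again an illustrative sketch):

    SELECT HEX(uuid)
    FROM events_uuid_ordered
    WHERE uuid = UNHEX('11d8eebc58e0a7d796690800200c9a66');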

    • Define UUID as binary(16), as the binary type does not have any character set.

 


 

Karthik Appigatla

Karthik joined Percona in September 2014 as a Remote DBA. His duties include handling remote DBA operations and collaborating closely with the team to evolve the RDBA offering. Karthik has 6+ years of experience in database administration. He began his career with Yahoo! as a Service Engineer (DevOps) and gradually moved into a MySQL DBA role. After working for close to 5 years at Yahoo!, he moved to Pythian as a Database Consultant and was promoted to Lead Database Consultant. He holds certifications from RedHat (RHCSA & RHCE), Oracle (MySQL-5.5 OCP) and MongoDB (10GEN Certified DBA & Developer). Originally from the holy city of Tirupati (India), he recently relocated to Hyderabad (India) after marriage. Karthik plays cricket, chess and carrom in his free time.

37 Comments

  • Definitely an interesting method Karthik.

    But, I think there’s a more efficient, performant and simple way of accomplishing the solution you’re trying to achieve. InnoDB tables, as you know, use clustered indexes in the background, meaning the PRIMARY index is part of the actual row (even when no Primary Key is defined). It also supports a special column called ‘ai_col’ which, when created as the first column in any table, set to an auto-incrementing unsigned integer, and flagged as the primary key, will enable certain built-in performance tweaks relating to the clustered index. All secondary indexes use the clustered index key (which is the important part).

    The benefit of the clustered index is that it saves disk io because the complete page/row is returned rather than just the key which requires a second read operation on the specified page to retrieve the actual data.

    In your solution the primary key would be a 16byte binary UUID, and all secondary indexes would be forced to use this meaning much larger index footprints, and I don’t know about you but most of our tables have multiple indexes. You’ve eliminated the IO overhead of inserting UUID’s and re-indexing, but you’ve increased the size of every index in the system as a side effect meaning both more physical and memory footprint.

    By using the ‘ai_col’ first column and using it as the primary key, it solves both of those problems. You would then just create a second column with the true UUID’s, add a UNIQUE index on it. Your rows will have a slightly larger size, but your indexes will be MUCH smaller, resulting in less IO, and a smaller footprint overall.

    You gain the benefits of an ordered insert key and thus low IO, you also gain the benefit of distributed UUID generation across all your app servers without any worry of collision in a cluster / multi-master environment, and you get to stick with standard UUID formatting which keeps things simple.

    I completely agree on the hyphens though, useless 🙂

    Kevin.

  • UUIDs come from many sources. Many of them do not use “UUID version 1”, like MySQL.

    Type 1 is recognizable by the “1” at the beginning of the third clump.

    Do not jump to the conclusion that you can rearrange any UUID to benefit from temporal clustering.

    Here is my blog on the topic:
    http://mysql.rjweb.org/doc.php/uuid

    @Kevin… Once the data, or index as you proposed, becomes too big to be cached, UUID lookups become I/O bound. Rearranging Type-1 UUIDs then becomes beneficial _if_ your accesses tend to be clustered in some time range, such as “recent events”.

  • If you are worried about the index size of the UUID field you can always switch to using a BINARY field and use UNHEX() and HEX() to convert your UUID on its way into and out of the table. So a 16 byte UUID would only take 8 bytes in BINARY:

    mysql> select HEX(UNHEX('1d8eebc58e0a7d7'));
    +-------------------------------+
    | HEX(UNHEX('1d8eebc58e0a7d7')) |
    +-------------------------------+
    | 01D8EEBC58E0A7D7              |
    +-------------------------------+

    mysql> select LENGTH(UNHEX('1d8eebc58e0a7d7'));
    +----------------------------------+
    | LENGTH(UNHEX('1d8eebc58e0a7d7')) |
    +----------------------------------+
    |                                8 |
    +----------------------------------+

  • Tokudb, with its “fractal” indexing strategy, builds the indexes in stages. In contrast, InnoDB inserts index entries “immediately” — actually that indexing is buffered by most of the size of the buffer_pool. To elaborate…

    When adding a record to an InnoDB table, here are (roughly) the steps performed to write the data (and PK) and secondary indexes to disk. (I leave out logging, provision for rollback, etc.) First the PRIMARY KEY and data:
    * Check for UNIQUEness constraints
    * Fetch the BTree block (normally 16KB) that should contain the row (based on the PRIMARY KEY).
    * Insert the row (overflow typically occurs 1% of the time; this leads to a block split).
    * Leave the page “dirty” in the buffer_pool, hoping that more rows are added before it is bumped out of cache (buffer_pool). Note that for AUTO_INCREMENT and TIMESTAMP-based PKs, the “last” block in the data will be updated repeatedly before splitting; hence, this delayed write adds greatly to the efficiency. OTOH, a UUID will be very random; when the table is big enough, the block will almost always be flushed before a second insert occurs in that block. <-- This is the inefficiency in UUIDs.
    Now for any secondary keys:
    * All the steps are the same, since an index is essentially a "table" except that the "data" is a copy of the PRIMARY KEY.
    * UNIQUEness must be checked immediately — cannot delay the read.
    * There are (I think) some other "delays" that avoid some I/O.

    Tokudb, on the other hand, does something like
    * Write data/index partially sorted records to disk before finding out exactly where it belongs.
    * In the background, combine these partially digested blocks. Repeat as needed.
    * Eventually move the info into the real table/indexes.

    If you are familiar with how sort-merge works, consider the parallels to Tokudb. Each "sort" does some work of ordering things; each "merge" is quite efficient.

    To summarize:
    * In the extreme (data/index much larger than buffer_pool), InnoDB must read-modify-write one 16KB disk block for each UUID entry.
    * Tokudb makes each I/O "count" by merging several UUIDs for each disk block. (Yeah, Toku rereads blocks, but it comes out ahead in the long run.)
    * Tokudb excels when the table is really big, which implies high ingestion rate.

    As for using a UDF instead of a Stored Function — CPU time is not the problem; it is I/O. Hence, the UDF won't help much.

  • You can pre-generate these uuids in javascript using this library which was open sourced by my employer, Fisdap: https://github.com/fisdap/innodb-optimized-uuid

  • I implemented this uuid generation method for PHP and Doctrine users. Works great with Laravel too in case anyone is interested. https://github.com/mbbender/doctrine-ordered-uuid-generator

  • @François Schiettecatte — HEX(UNHEX()) is a noop other than leading zeros and case folding.

    You can’t shrink a UUID to 8 bytes without losing information, and potentially causing “duplicate key”.

  • I tested this procedure in a NodeJS Async Whilst loop and am immediately hitting duplicate IDs.

    I cannot get past 100 inserts without this occurring with it sometimes occurring in as little as 2.

  • Awesome write up!

    If anyone needs a NodeJS / JavaScript class for creating and managing the binary(16) IDs rather than using the stored procedure checkout this repo:

    https://github.com/gregl83/monotonic-id

  • very interesting article.

    Do you know of any performance difference between storing the optimized UUID as hex (11e5e1833a23ad4188733abca40787bc) versus as a decimal value (23790485847991417512368510810209093564)?

    e.g.
    3a23ad41-e183-11e5-8873-3abca40787bc => 11e5e1833a23ad4188733abca40787bc => 23790485847991417512368510810209093564

  • @psingh — I would expect the main difference to be a minor one… The field would need to be a different size. The bigger the data, the less cacheable it is, hence the more I/O, hence slower. There is essentially no difference if everything is cached; but it is a big difference in a table or index that is too big to be cached and indexed by UUID (hex or decimal). This difference is more noticeable in VARCHAR(36) vs BINARY(16), than for DECIMAL(40,0), which takes 18 bytes (barely more than 16). Storing the hex would be 32 bytes for CHAR(32) CHARSET ascii. (Or, big mistake, 96 for CHAR(32) CHARSET utf8.)

  • Great UUID perf improvement!
    I was wondering if there is a shorter version of this optimized UUID.
    MySQL has UUID_SHORT which is ordered but it is sequential
    http://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid-short

    We could remove the Ethernet card number part of the UUID and use the MySQL server number as UUID_SHORT does.
    Would the result saved in the DB still be binary(16)?

    • Perhaps the main reason for UUIDs is to be able to generate unique numbers on multiple servers simultaneously. (I see little need to do so in a single server; there are other, safer, smaller, ways.) UUID_SHORT uses @@server_id, which is (potentially) unique only within a replication topology; UUID’s MAC or Ethernet address is aimed at unrelated computers (such as clients).

  • Related: I put together a (slightly) more formalized version of a UUID (“Version 6”) which solves this same issue. https://bradleypeabody.github.io/uuidv6/ Could be useful in the cases described here.

  • I do not get why data size is bigger in the events_uuid case than events_int. The differences in index size, total size, insert time, I get: but you’re storing 16 bytes/record for events_uuid vs 24 bytes for events_int. Seems like data size would be the same as events_uuid_ordered and less than events_int.

  • I wonder a bit, why the fourth and fifth part of the original UUID are left untouched. Aren’t they pseudo-static, too? Why not move the changing parts over to the right? For example why not convert “AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE” to “DDDDEEEEEEEEEEEECCCCBBBBAAAAAAAA”? If that’s bad for indexing again, maybe someone could explain.

  • What about splitting the UUID across columns dedicated to each section? It can’t be a totally bad way.

    I can imagine splitting each character into its own column would be pretty bad, though.

  • I have implemented this scheme. Running locally on an iMac on MySQL 5.6 and in a MySQL 5.7 QA environment, the code ran well. Today, when it was deployed to production, the code hammered the MySQL 5.6 server. In fact, it seems like an INSERT into a table with this binary UUID column crushed the machine. The code had to be rolled back because it was killing the client server and the DB server. I need some clues as to what might cause the DB server to suddenly get hammered trying to create UUIDs like this.

    I am thinking this is responsible…

    uuid varbinary(16) DEFAULT NULL,

    UNIQUE KEY index_fast_zids_on_uuid (uuid) USING BTREE

    Your schema above does not use BTREE… what does it default to?

    • Martin… You are sure they were Type 1? And rearranged? Did you look at HEX(uuid) to see that they were roughly ordered as they were inserted? And the uuid was the PRIMARY KEY?

  • Hi Martin,

    Can you try removing the unique key constraint on uuid? That is a killer. If the application looks up based on uuid, you can create a normal index and also change the column to binary from varbinary:

    uuid binary(16) DEFAULT NULL,

    KEY index_fast_zids_on_uuid (uuid)

    By default, the index is BTREE.

    • UNIQUE keys (including PRIMARY KEY) must be checked as you insert the row. Actions on non-unique keys are delayed; search for “change buffer” for further discussion. Each row inserted must do a potentially cached read of a block of each UNIQUE key, but does not need such a read for non-unique keys. However, eventually the read-modify-write for each non-unique index block (that needs changing) will be done. That is, UNIQUE keys are sluggish at INSERT time; non-unique keys may be sluggish later.

      The ultimate question is “will the next block be cached?” For a small index, it could be cached. Once the index grows bigger than the buffer_pool, it becomes less and less likely to be cached, hence more reads. Reads are I/O; I/O is the biggest factor in performance.

      My point is that it is more complicated than simply saying that “uniqueness is the killer”.

      SPATIAL and FULLTEXT are the only indexes other than BTree in InnoDB; you must explicitly ask for them, else you get BTree.
