
Announcing TokuDB v7.1

October 14, 2013 | Posted In: Tokutek, TokuView


Today we released TokuDB v7.1, which includes the following important features and fixes:

  • Added ability for users to view lock information via information_schema.tokudb_trx, information_schema.tokudb_locks, and information_schema.tokudb_lock_waits tables.
  • Changed the default compression to zlib and default basement node size to 64K.
  • Changed default analyze time to 5 seconds.
  • Added server variable to control amount of memory allocated for each bulk loader. In prior TokuDB versions each loader allocated 50% of the available TokuDB cache.
  • Changed table close behavior such that all data for the table remains in the cache (and is not flushed immediately).
  • Removed user-reported stalls due to cache pressure induced by the bulk loader, lock tree escalation, and a particular open-table stall.
  • Fixed several bugs and behavioral issues reported by users.
  • Full details on the changes in TokuDB v7.1 can be found in the release notes, available from our documentation page.
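The new lock-visibility tables mentioned above can be queried like any other information_schema table. A minimal sketch, assuming a server running TokuDB v7.1 or later (column layouts vary by version, so `SELECT *` is used rather than naming specific columns):

```sql
-- Active TokuDB transactions:
SELECT * FROM information_schema.tokudb_trx;

-- Locks currently held by those transactions:
SELECT * FROM information_schema.tokudb_locks;

-- Which transaction is waiting on which lock (useful for diagnosing blocking):
SELECT * FROM information_schema.tokudb_lock_waits;
```

These require a running MySQL server with the TokuDB plugin enabled, so they are shown here as an illustration rather than a verified example.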

 


2 Comments

  • > default basement node size to 64K

    Isn’t basement node the fundamental unit of compression in TokuDB?

    If so, would reducing the basement node size reduce write performance and compression ratio?

    Also, does this change improve point query performance?

    • The basement node is indeed the fundamental unit of compression in TokuDB. I’ve done quite a bit of benchmarking and measurement across basement node sizes and found that, in general, 64K is the ideal size for _most_ use cases. The compression algorithms we use tend to hit their sweet spot somewhere between 32K and 64K. This parameter can still be overridden in my.cnf, as well as per session, to use larger or smaller values. A smaller basement node size also _may_ improve point query performance, since less data needs to be decompressed and deserialized for the query operation.

      My rule of thumb is that zlib compression with 64K basement nodes is the right place to start. If you want extreme compression, use lzma and 128K (or larger) basement nodes; just understand the query latency and CPU utilization trade-offs.
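      For readers who want to experiment with these trade-offs, the settings discussed above map to TokuDB server variables. A sketch, assuming the variable names `tokudb_read_block_size` (basement node size, in bytes) and `tokudb_row_format` (compression algorithm) from the TokuDB documentation; check your version's docs before relying on them:

```sql
-- Larger basement nodes plus heavier compression, per session:
SET SESSION tokudb_read_block_size = 131072;   -- 128K basement nodes
SET SESSION tokudb_row_format = tokudb_lzma;   -- lzma instead of the zlib default

-- The row format can also be chosen per table at creation time:
CREATE TABLE archive_t (
  id INT PRIMARY KEY,
  payload BLOB
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;
```

      As with any server-side tuning, these statements need a running MySQL server with TokuDB loaded, so they are offered as a starting point rather than a verified recipe.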
