
5.0.75-build12 Percona binaries

January 23, 2009 | Posted In: Percona Software


After several important fixes to our patches, we have made binaries available for build12.

Fixes include:

Control of the InnoDB insert buffer, to address the problems Peter mentioned in https://www.percona.com/blog/2009/01/13/some-little-known-facts-about-innodb-insert-buffer/; also see Bug #41811 for symptoms of the insert buffer problem.


* innodb_flush_neighbor_pages (default 1) – When a dirty page is flushed (written to the datafile), this parameter determines whether the neighboring pages in the datafile are also flushed at the same time. If you use storage that has no “head seek delay” (e.g. an SSD, or a sufficiently large write buffer), 0 may give better performance. 0: disable, 1: enable.

* innodb_ibuf_max_size (default: half of innodb_buffer_pool_size, in bytes) – This is a startup parameter. If it is set lower than half of innodb_buffer_pool_size, it is used as the maximum size of the insert buffer. Restricting it to a very small value (e.g. 0) is not recommended for performance; if you don’t want the insert buffer growing large, use the following parameters instead. (If you use very fast storage, a small value, such as several MB, may give better performance.)

* innodb_ibuf_accel_rate (default 100(%)) – This parameter provides additional tuning of the amount of insert buffer processing done by the background thread. Sometimes innodb_io_capacity alone is insufficient to tune the insert buffer.

* innodb_ibuf_active_contract (default 0) – By default (as in stock InnoDB), user threads do nothing to contract the insert buffer until it reaches its maximum size. Setting this to 1 makes each user thread actively contract the insert buffer, asynchronously, whenever possible.
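As a sketch, the four parameters above could be combined in my.cnf for a fast-storage setup. The values below are illustrative assumptions only, not recommendations; benchmark before adopting any of them:

```ini
# Hypothetical my.cnf fragment for fast storage (e.g. SSD);
# values are illustrative, not tuned recommendations.
[mysqld]
innodb_flush_neighbor_pages = 0    # skip neighbor-page flushing (no seek delay)
innodb_ibuf_max_size        = 8M   # cap the insert buffer at a few MB
innodb_ibuf_accel_rate      = 200  # merge the insert buffer more aggressively
innodb_ibuf_active_contract = 1    # let user threads contract it asynchronously
```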

The second important fix introduces the variable use_global_long_query_time, which lets all current threads see a change to long_query_time. By default, a value set with SET GLOBAL long_query_time=N is visible only to newly established connections, which is a problem if you have a pre-established connection pool, say in a Java or Ruby on Rails application. With use_global_long_query_time=true, even all current threads will respect SET GLOBAL long_query_time=N. The feature was made for Engine Yard, a hosting provider for Ruby on Rails applications.
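As an illustrative sketch, with the option enabled in my.cnf, a later SET GLOBAL long_query_time=1 takes effect on already-open pooled connections as well, not only on new ones:

```ini
# Hypothetical my.cnf fragment; values are for illustration.
[mysqld]
log_slow_queries
long_query_time            = 2
use_global_long_query_time = 1   # SET GLOBAL applies to existing connections too
```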

You can download binaries (RPMs, x86_64) and sources with patches here

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication, 3rd Edition.


  • Vadim,

    Will setting innodb_ibuf_max_size=0 stop the insert buffer from being used at all? Is it possible to disable this functionality altogether?

    Also, can you explain what exactly innodb_ibuf_accel_rate does, because “this is an additional tuning variable” is not very clear. The same goes for innodb_ibuf_active_contract – I do not understand what exactly this option does.

  • Peter,

    innodb_ibuf_max_size=0 does not disable insert buffer activity; there will still be merge operations to perform, and innodb_ibuf_max_size=0 can make that worse. It is probably better to use innodb_ibuf_max_size=1M.

    Yasufumi will comment on the other options.

  • I downloaded https://www.percona.com/mysql/5.0.75-b12/source/mysql-5.0.75-percona-b12-src.tar.gz, but each time I try to decompress it, I get an error:

    gzip: stdin: invalid compressed data--format violated
    tar: Unexpected EOF in archive
    tar: Unexpected EOF in archive
    tar: Error is not recoverable: exiting now

    Could it be that the archive is corrupted on the server, or is it a problem on my side?
    Thanks a lot.
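One way to narrow down where the corruption happened (a generic sketch, not specific to the Percona download) is to test the gzip layer by itself: gzip -t validates the compressed stream without extracting, so a failure on a freshly re-downloaded copy points at the file on the server rather than at the transfer. The demo below simulates both cases with a throwaway archive:

```shell
# Build a small throwaway archive to stand in for the download.
mkdir -p /tmp/b12-demo && echo "hello" > /tmp/b12-demo/file.txt
tar czf /tmp/demo.tar.gz -C /tmp b12-demo

# An intact archive passes the integrity test:
gzip -t /tmp/demo.tar.gz && echo "intact: OK"

# A truncated copy (mimicking a partial download) fails it, with the same
# class of error tar reported above:
head -c 50 /tmp/demo.tar.gz > /tmp/truncated.tar.gz
gzip -t /tmp/truncated.tar.gz 2>/dev/null || echo "truncated: corrupt"
```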

  • Vadim,

    Can we make an option to disable the insert buffer completely when it is set to 0 – basically to have non-unique indexes use the same code as unique indexes? I think this would be valuable (or at least nice to research) for cases when random IO is cheap.

  • Peter,

    innodb_ibuf_max_size=0 never inserts into the insert buffer, so it is equivalent to disabling the insert buffer. But it makes every insert to a secondary index synchronous (a read IO), and all such inserts must always be slower. I never recommend innodb_ibuf_max_size=0. If you don’t want the insert buffer to grow, I recommend innodb_ibuf_max_size=[several MB] and innodb_ibuf_active_contract=1 instead.

    Also, currently, innodb_io_capacity affects all background IO activity (flushing, insert buffer, etc.). I think this gives an insufficient degree of freedom; in particular, insert buffer activity should not be governed by IO capacity alone. For now, innodb_ibuf_accel_rate is added to increase insert buffer activity.

    The amount of insert buffer merged per call to ibuf_contract_for_n_pages() from the background thread is:
    [default activity] * innodb_io_capacity(%) * innodb_ibuf_accel_rate(%)

    I think we will need to reorganize these parameters someday.
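To make the formula above concrete, assume a "default activity" of 5 pages per call (a figure picked purely for illustration, not taken from the InnoDB source). Simple shell arithmetic then shows how innodb_ibuf_accel_rate scales the background merge work:

```shell
# 5 pages default activity, innodb_io_capacity at 100%,
# innodb_ibuf_accel_rate at its default 100%: unchanged, 5 pages.
echo $(( 5 * 100 / 100 * 100 / 100 ))

# Raising innodb_ibuf_accel_rate to 200% doubles the merge amount: 10 pages.
echo $(( 5 * 100 / 100 * 200 / 100 ))
```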

  • Thanks Yasufumi,

    It would be good to check different values in benchmarks. I understand that disabling the insert buffer will make inserts slower, but it also means no merge will be needed for future selects.
    Unless there is a significant IO saving from merges, it just pushes the job to other threads (or the background thread).

  • Igor,

    Right. We do not have Windows builds because we have no customer demand for them. We would appreciate feedback, though, from someone building it on Windows.

  • Hi Peter,

    I’ll give it a try.
    Can you give me your configuration options (e.g. “configure WITH_INNOBASE_STORAGE_ENGINE…”), so I can try to build a comparable version?


  • Igor,

    The configure line is
    ./configure '--disable-shared' '--with-server-suffix=-percona-highperf-b12' '--without-embedded-server' '--with-innodb' '--with-csv-storage-engine' '--with-archive-storage-engine' '--with-blackhole-storage-engine' '--with-federated-storage-engine' '--without-bench' '--with-zlib-dir=bundled' '--with-big-tables' '--enable-assembler' '--enable-local-infile' '--with-mysqld-user=mysql' '--with-unix-socket-path=/var/lib/mysql/mysql.sock' '--with-pic' '--prefix=/' '--with-extra-charsets=complex' '--exec-prefix=/usr' '--libexecdir=/usr/sbin' '--libdir=/usr/lib64' '--sysconfdir=/etc' '--datadir=/usr/share' '--localstatedir=/var/lib/mysql' '--infodir=/usr/share/info' '--includedir=/usr/include' '--mandir=/usr/share/man' '--enable-thread-safe-client' '--with-comment=MySQL Percona High Performance Edition (GPL)' '--with-readline' 'CC=gcc' 'CFLAGS=-O2 -g' 'CXXFLAGS=-O2 -g' 'CXX=gcc' 'LDFLAGS='

    You may also get it by running mysqlbug from the distribution.

  • Kevin,

    We have a bunch of customers who are evaluating SSDs for use in production, but I can’t recall one running them in real production yet.

  • We are using SSDs in production, and we are already collecting I/O and MySQL data from those servers, which handle approx. 8000 queries/sec. Could that kind of data be of interest to you?

  • Is innodb_ibuf_max_size allocated inside innodb_buffer_pool_size or not? I believe it’s inside; otherwise we would need to leave 50% of the buffer pool’s memory just for the insert buffer.
