
Feed aggregator

Percona Server with TokuDB (beta): Installation, configuration

Latest MySQL Performance Blog posts - June 26, 2014 - 9:18am

My previous post was an introduction to the TokuDB storage engine, aimed at explaining the basics of its design and how it differentiates itself from InnoDB/XtraDB. This post is all about motivating you to give it a try and have a look for yourself. Percona Server does not officially support TokuDB as of today, though the development team is working hard on this and the first GA release of Percona Server with TokuDB is looming on the horizon. However, there’s a beta version available now. For the installation tests in this post I’ve used the latest version of Percona Server alongside the accompanying TokuDB package, which was published last week.

Installing Percona Server with TokuDB on a sandbox

One of the tools Percona Support Engineers really love is Giuseppe Maxia’s MySQL Sandbox. It allows us to set up a sandbox running a MySQL instance of our choice and makes running multiple instances for testing purposes very easy. Whenever a customer reaches out to us with a problem happening on a particular version of MySQL or Percona Server that we can reproduce, we quickly spin up a new sandbox and test it ourselves, so it’s very handy. I’ll use one here to explore this beta version of Percona Server with TokuDB, but if you prefer you can install it the regular way using a package from our apt experimental or yum testing repositories.

We start by downloading the tarballs from here: TokuDB’s plugin has been packaged in its own tarball, so there are two to download. Once you get them, decompress both, merge them into a unified working directory, and compress that again to create a single tarball we’ll use as the source for our sandbox:

[nando@test01 ~]# tar zxvf Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz
[nando@test01 ~]# tar zxvf Percona-Server-5.6.17-rel66.0-608.TokuDB.Linux.x86_64.tar.gz
[nando@test01 ~]# tar cfa Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz Percona-Server-5.6.17-rel66.0-608.Linux.x86_64/

Before going ahead, verify whether you have transparent huge pages enabled, as TokuDB won’t run if they are. See this documentation page for an explanation of what this is and how to disable it on Ubuntu. On my CentOS test server it is defined in a slightly different place, and I did the following to temporarily disable it:

[nando@test01]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
[nando@test01]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
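If you want to double-check the current state, you can simply read the same sysfs files back (the path below is the CentOS/RHEL one used above; on most other distributions it is under /sys/kernel/mm/transparent_hugepage/):

[nando@test01]# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
[nando@test01]# cat /sys/kernel/mm/redhat_transparent_hugepage/defrag

The value shown between square brackets is the one currently in effect; after the change it should read [never]. Keep in mind that echoing values into sysfs does not survive a reboot, so for a permanent change you would want to make the setting persistent (for example via your init scripts or kernel boot parameters), as described in the documentation page mentioned above.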

We’re now ready to create our sandbox. The following command should be enough (I’ve chosen to run Percona Server on port 5617, you can use any other available one):

[nando@test01 ~]# make_sandbox Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz -- --sandbox_directory=tokudb --sandbox_port=5617

If the creation process goes well you will see something like the following at the end:

.... sandbox server started
Your sandbox server was installed in $HOME/sandboxes/tokudb

You should now be able to access the MySQL console on the sandbox with the default credentials; if you cannot, check the error log in $HOME/sandboxes/tokudb/data/msandbox.err:

[nando@test01 ~]# mysql --socket=/tmp/mysql_sandbox5617.sock -umsandbox -pmsandbox

Alternatively, you can make use of the “use” script located inside the sandbox directory, which employs the same credentials (configured in the client section of the configuration file my.sandbox.cnf):

[nando@test01 ~]# cd sandboxes/tokudb/
[nando@test01 tokudb]# ./use

The first thing to check is whether TokuDB is listed as an available storage engine:

mysql> SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| (...)              | (...)   | (...)                                                                      | (...)        | (...)| (...)      |
| TokuDB             | YES     | Tokutek TokuDB Storage Engine with Fractal Tree(tm) Technology             | YES          | YES  | YES        |
| InnoDB             | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |
| (...)              | (...)   | (...)                                                                      | NO           | (...)| (...)      |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+

If that’s not the case, you may need to load the plugins manually – I had to do so in my sandbox; you may not need to if you’re installing it from a package in a fresh setup:

mysql> INSTALL PLUGIN tokudb SONAME 'ha_tokudb.so';

TokuDB should now appear in the list of supported engines, but you still need to activate the related plugins:

mysql> INSTALL PLUGIN tokudb_file_map SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_fractal_tree_info SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_fractal_tree_block_map SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_trx SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_locks SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_lock_waits SONAME 'ha_tokudb.so';

Please note the INSTALL PLUGIN action results in permanent changes and thus is required only once. No modifications to MySQL’s configuration file are required to have those plugins load in subsequent server restarts.

Now you should see not only the main TokuDB plugin but also the INFORMATION_SCHEMA add-ons:

mysql> SHOW PLUGINS;
+-------------------------------+----------+--------------------+--------------+---------+
| Name                          | Status   | Type               | Library      | License |
+-------------------------------+----------+--------------------+--------------+---------+
| (...)                         | (...)    | (...)              | (...)        | (...)   |
| TokuDB                        | ACTIVE   | STORAGE ENGINE     | ha_tokudb.so | GPL     |
| TokuDB_trx                    | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_locks                  | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_lock_waits             | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_file_map               | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_info      | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_block_map | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
+-------------------------------+----------+--------------------+--------------+---------+

We are now ready to create our first TokuDB table – the only thing we do differently here is specify TokuDB as the storage engine to use:

mysql> CREATE TABLE test.Numbers (id INT PRIMARY KEY, number VARCHAR(20)) ENGINE=TokuDB;

Note some unfamiliar files lying in the datadir; the details surrounding them are certainly good material for future posts:

[nando@test01]# ls ~/sandboxes/tokudb/data
auto.cnf                   _test_Numbers_main_3_2_19.tokudb
ibdata1                    _test_Numbers_status_3_1_19.tokudb
ib_logfile0                tokudb.directory
ib_logfile1                tokudb.environment
log000000000000.tokulog25  __tokudb_lock_dont_delete_me_data
msandbox.err               __tokudb_lock_dont_delete_me_environment
mysql                      __tokudb_lock_dont_delete_me_logs
mysql_sandbox5617.pid      __tokudb_lock_dont_delete_me_recovery
performance_schema         __tokudb_lock_dont_delete_me_temp
tc.log                     tokudb.rollback
test

Configuration: what’s really important

As noted by Vadim long ago, “Tuning of TokuDB is much easier than InnoDB, there’re only a few parameters to change, and actually out-of-box things running pretty well“:

mysql> show variables like 'tokudb_%';
+---------------------------------+------------------+
| Variable_name                   | Value            |
+---------------------------------+------------------+
| tokudb_alter_print_error        | OFF              |
| tokudb_analyze_time             | 5                |
| tokudb_block_size               | 4194304          |
| tokudb_cache_size               | 522651648        |
| tokudb_check_jemalloc           | 1                |
| tokudb_checkpoint_lock          | OFF              |
| tokudb_checkpoint_on_flush_logs | OFF              |
| tokudb_checkpointing_period     | 60               |
| tokudb_cleaner_iterations       | 5                |
| tokudb_cleaner_period           | 1                |
| tokudb_commit_sync              | ON               |
| tokudb_create_index_online      | ON               |
| tokudb_data_dir                 |                  |
| tokudb_debug                    | 0                |
| tokudb_directio                 | OFF              |
| tokudb_disable_hot_alter        | OFF              |
| tokudb_disable_prefetching      | OFF              |
| tokudb_disable_slow_alter       | OFF              |
| tokudb_disable_slow_update      | OFF              |
| tokudb_disable_slow_upsert      | OFF              |
| tokudb_empty_scan               | rl               |
| tokudb_fs_reserve_percent       | 5                |
| tokudb_fsync_log_period         | 0                |
| tokudb_hide_default_row_format  | ON               |
| tokudb_init_flags               | 11403457         |
| tokudb_killed_time              | 4000             |
| tokudb_last_lock_timeout        |                  |
| tokudb_load_save_space          | ON               |
| tokudb_loader_memory_size       | 100000000        |
| tokudb_lock_timeout             | 4000             |
| tokudb_lock_timeout_debug       | 1                |
| tokudb_log_dir                  |                  |
| tokudb_max_lock_memory          | 65331456         |
| tokudb_pk_insert_mode           | 1                |
| tokudb_prelock_empty            | ON               |
| tokudb_read_block_size          | 65536            |
| tokudb_read_buf_size            | 131072           |
| tokudb_read_status_frequency    | 10000            |
| tokudb_row_format               | tokudb_zlib      |
| tokudb_tmp_dir                  |                  |
| tokudb_version                  | tokudb-7.1.7-rc7 |
| tokudb_write_status_frequency   | 1000             |
+---------------------------------+------------------+
42 rows in set (0.00 sec)

The most important of the tokudb_ variables is arguably tokudb_cache_size. The test server where I ran these tests (test01) has a little less than 1GB of memory and, as you can see above, TokuDB is “reserving” half (50%) of it for itself. That’s the default behavior but, of course, you can change it. And you must do so if you are also going to have InnoDB tables on your server – you should not overcommit memory between the InnoDB and TokuDB engines. Shlomi Noach wrote a good post explaining the main TokuDB-specific variables and what they do. It’s definitely worth a read.
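As a quick sanity check, you can compare the two main memory consumers side by side and confirm they fit comfortably within the RAM you have (values will of course differ on your server):

mysql> SELECT @@global.tokudb_cache_size AS tokudb_cache_bytes, @@global.innodb_buffer_pool_size AS innodb_bp_bytes;

As far as I can tell, tokudb_cache_size is only read at server startup, so to change it you would set it in the [mysqld] section of my.sandbox.cnf (or my.cnf) and restart the instance.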

I hope you have fun testing Percona Server with TokuDB! If you run into any problems worth reporting, please let us know.

The post Percona Server with TokuDB (beta): Installation, configuration appeared first on MySQL Performance Blog.

Why %util number from iostat is meaningless for MySQL capacity planning

Latest MySQL Performance Blog posts - June 25, 2014 - 3:00am

Earlier this month I wrote about vmstat iowait cpu numbers, and some of the comments I got were advocating the use of the %util number reported by the iostat tool instead. I find this number even more useless for MySQL performance tuning and capacity planning.

Now let me start by saying this is a really tricky and deceptive number. Many DBAs who report their systems having a very busy IO subsystem point out that the %util in iostat was above 99%, and therefore believe this number is a good indicator of an overloaded IO subsystem.

Indeed – when your IO subsystem is busy, up to its full capacity, the utilization should be very close to 100%. However, it is perfectly possible for the IO subsystem – and MySQL with it – to have plenty more capacity than the utilization figure suggests, even when it shows 100%, as I will show in an example.

Before that, though, let’s see what the iostat manual page has to say on this topic – from the man page we can read:

%util

Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits.

Which says right here that the number is useless for pretty much any production database server that is likely to be running RAID, Flash/SSD, SAN or cloud storage (such as EBS) capable of handling multiple requests in parallel.

Let’s look at the following illustration. I will run sysbench on a system with rather slow storage, a data size larger than the buffer pool, and a uniform distribution, to put pressure on the IO subsystem. I will use a read-only benchmark here as it keeps things simpler…

sysbench --num-threads=1 --max-requests=0 --max-time=6000000 --report-interval=10 --test=oltp --oltp-read-only=on --db-driver=mysql --oltp-table-size=100000000 --oltp-dist-type=uniform --init-rng=on --mysql-user=root --mysql-password= run

I’m seeing some 9 transactions per second, while disk utilization from iostat is at nearly 96%:

[ 80s] threads: 1, tps: 9.30, reads/s: 130.20, writes/s: 0.00 response time: 171.82ms (95%)
[ 90s] threads: 1, tps: 9.20, reads/s: 128.80, writes/s: 0.00 response time: 157.72ms (95%)
[ 100s] threads: 1, tps: 9.00, reads/s: 126.00, writes/s: 0.00 response time: 215.38ms (95%)
[ 110s] threads: 1, tps: 9.30, reads/s: 130.20, writes/s: 0.00 response time: 141.39ms (95%)

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
dm-0 0.00 0.00 127.90 0.70 4070.40 28.00 31.87 1.01 7.83 7.52 96.68

This makes a lot of sense – with a single-threaded read workload the drive should only be busy fetching the data needed by the queries, which will not be 100% of the time, as there is some extra time needed to process the query on the MySQL side as well as to pass the result set back to sysbench.

So: 96% utilization and 9 transactions per second – that is close to full system capacity, with less than 5% of device time to spare, right?

Let’s run a benchmark with more concurrency – 4 threads at a time – and see…

[ 110s] threads: 4, tps: 21.10, reads/s: 295.40, writes/s: 0.00 response time: 312.09ms (95%)
[ 120s] threads: 4, tps: 22.00, reads/s: 308.00, writes/s: 0.00 response time: 297.05ms (95%)
[ 130s] threads: 4, tps: 22.40, reads/s: 313.60, writes/s: 0.00 response time: 335.34ms (95%)

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
dm-0 0.00 0.00 295.40 0.90 9372.80 35.20 31.75 4.06 13.69 3.38 100.01

So we’re seeing 100% utilization now, but here is what’s interesting: we were able to reclaim much more than the “less than 5%” that utilization suggested was left – the throughput of the system increased about 2.5x.

Finally let’s do the test with 64 threads – this is more concurrency than exists at the storage level, which on this system is conventional hard drives in a RAID array…

[ 70s] threads: 64, tps: 42.90, reads/s: 600.60, writes/s: 0.00 response time: 2516.43ms (95%)
[ 80s] threads: 64, tps: 42.40, reads/s: 593.60, writes/s: 0.00 response time: 2613.15ms (95%)
[ 90s] threads: 64, tps: 44.80, reads/s: 627.20, writes/s: 0.00 response time: 2404.49ms (95%)

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
dm-0 0.00 0.00 601.20 0.80 19065.60 33.60 31.73 65.98 108.72 1.66 100.00

In this case we’re getting 4.5x the throughput compared to a single thread, at 100% utilization. We’re also getting almost double the throughput of the run with 4 threads where 100% utilization was reported. This makes sense – there are 4 drives which can work in parallel, and with many outstanding requests they are able to optimize their seeks better, hence giving a bit more than 4x.

So what have we seen? A system reported at 96% utilization could still be driven to provide 4.5x the throughput – it had plenty of extra IO capacity. More powerful storage might have a significantly greater ability to run requests in parallel, so it is quite possible to see 10x or more headroom after %util starts to be reported close to 100%.

So if %util is not very helpful, what can we use to understand our database IO capacity better? First let’s look at the performance reported from those sysbench runs. If we look at the 95% response time, you can see that 1 thread and 4 threads had relatively close 95% times, growing from roughly 150ms to 250-300ms. This is the number I really like to look at: if the system is able to respond to queries with a response time not significantly higher than it has at a concurrency of 1, it is not overloaded. I like using 3x as a multiplier – i.e. when the 95% response time spikes to more than 3x the single-concurrency value, the system may be getting overloaded.

With 64 threads the 95% response time is 15-20x of the one we see with single thread so it is surely overloaded.

Do we have anything reported by iostat which we can use in a similar way? It turns out we do! Check out the “await” column, which tells us how long the requester had to wait for an IO request to be serviced. With single concurrency it is 7.8ms, which is what these drives can do for random IO and is as good as it gets. With 4 threads it is 13.7ms – less than double the best possible, so also good enough… with a concurrency of 64, however, it is 108ms, which is over 10x what this device can deliver with no waiting, and that is surely a sign of overload.

A couple of words of caution. First, do not look at svctm, which is not designed with parallel processing in mind. You can see that in our case it actually gets better with high concurrency, while in reality the database had to wait a lot longer for the requests it submitted. Second, iostat mixes reads and writes together in a single set of statistics, which specifically for databases – and especially on RAID with a BBU – can be like mixing apples and oranges: writes should go to the writeback cache and be acknowledged essentially instantly, while reads only complete when the actual data can be delivered. The pt-diskstats tool from Percona Toolkit breaks them apart and so can be much more useful for storage tuning for database workloads.
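If you want to try it, a plain invocation on the database host is enough to get the per-device read/write breakdown (sampling and filtering options are described in the tool’s documentation):

$ pt-diskstats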

A final note – I used a read-only workload on purpose. When it comes to writes things can be even more complicated: the MySQL buffer pool can make intensive writes more efficient, and group commit may be able to commit a lot of transactions with a single disk write. Still, the same basic logic applies.

Summary: The takeaway is simple – %util only shows the share of time during which the device had at least one operation to deal with; it says nothing about how far from saturation a device that can serve requests in parallel really is, which is the case for the majority of modern IO subsystems. So you may have a lot of storage IO capacity left even when utilization is close to 100%.

The post Why %util number from iostat is meaningless for MySQL capacity planning appeared first on MySQL Performance Blog.

Sysbench Benchmarking of Tesora’s Database Virtualization Engine

Latest MySQL Performance Blog posts - June 24, 2014 - 5:00am

Tesora, previously called Parelastic, asked Percona to do a sysbench benchmark evaluation of its Database Virtualization Engine on specific architectures on Amazon EC2.

The focus of Tesora is to provide a scalable Database As A Service platform for OpenStack. The Database Virtualization Engine (DVE) plays a part in this as it aims at allowing databases to scale transparently across multiple MySQL shards.

DVE was open sourced last week. Downloads and source are already available on tesora.com

Some of the features include:

  • Transparent sharding of data across multiple storage nodes.
  • Applications can connect to DVE directly, using the MySQL Protocol, no code changes required.
  • Transactional and full ACID compliance with multiple storage nodes.
  • Storage Nodes can be added on an existing cluster.
Benchmarking Setup

Synthetic benchmarks were run using sysbench on different environments and different DVE architectures, as provided by Tesora. Architectures with 1 and 3 DVE nodes were benchmarked using up to 5 storage nodes.

The environments include a disk-bound dataset, the purpose of which is to create more insight into how large datasets might scale across multiple nodes. This type of use case is a common reason for companies to look into solutions like sharding.

A memory-bound dataset was also benchmarked to find out how DVE performs.

The memory-bound dataset was also used to compare a standalone MySQL Instance with a single DVE node using a single storage node. This provides more insight into the amount of overhead DVE creates at its most basic setup.

The remainder of this blog post gives a general overview on the findings of these benchmarks.
The full report, which includes more information on the configuration and goes deeper in its analysis of the results, can be downloaded here (PDF).

Scalability

OLTP – Disk Bound – Throughput:

OLTP – Disk Bound – Response Time:

In terms of scalability DVE lives up to its expectations as long as there is enough CPU available on the DVE nodes.

More complex transactions that query across multiple shards scale quite well. The explanation for that scalability, which is beyond linear, is that the available memory grows as storage nodes are added. The environment becomes less disk bound and performs better.

SELECT, INSERT, UPDATE – Disk Bound – Throughput:

SELECT, INSERT, UPDATE – Disk Bound – Response Time:

Single row reads and writes scale even better. This demonstrates that sharding has to be tailored towards both the data and the queries that will be executed across those tables.

Especially when using storage nodes with default EBS storage, disk performance is bad which makes the difference even larger.

The good thing about this solution is that storage and DVE nodes can be added, and the capacity of the whole system increases. No application code changes are necessary.

Running such an environment could come at a higher cost, but it could save a lot of resources and thus money on development. Of course, as a sharded architecture such as this is more complex than a non-sharded architecture, the operational cost should not be ignored.

Overhead

Memory Bound Throughput:

Memory Bound Response Time:

The overhead of using DVE is noticeable, but much of it is related to the added network hop in the database layer and the CPU requirements of the DVE nodes.

DVE suffers as we go above 64 threads as you can see in the big increase in response time.

CPU Intensive

OLTP – Disk Bound:

DVE nodes are very CPU intensive. Throughout the analysis, the bottleneck is often caused by the CPU-constrained DVE nodes; this is very visible when looking at the benchmark results of p1_s3 and p1_s5, which use only one DVE node.

Even with simpler single-row, primary key-based SELECT statements, the overhead is noticeable.

Keep in mind that the hardware specifications of the DVE nodes (8 cores) were much higher than those of the database nodes (2 cores) themselves. This makes the issue even more apparent.

Summary

Overall DVE looks very promising. When looking at the sysbench results, DVE seems to scale very well, provided that there are enough CPU resources available for the DVE nodes. You can download the full report here (PDF).

It will be interesting to see how much DVE scales on larger instance types. Other types of benchmarks should also be performed, such as LinkBench. It is also important to understand the impact on the operational side, such as adding storage nodes, backups & recovery, high availability, and how well it handles transactions in various scenarios. Stay tuned.

The post Sysbench Benchmarking of Tesora’s Database Virtualization Engine appeared first on MySQL Performance Blog.

Getting to know TokuDB for MySQL

Latest MySQL Performance Blog posts - June 23, 2014 - 12:00am

During last April’s Percona Live MySQL Conference and Expo, TokuDB celebrated its first full year as an open source storage engine. I still remember reading the official announcement and the expectations it created one year ago. The premise was very interesting, as it had the potential of helping MySQL manage “big data” in a way InnoDB just couldn’t. It also provided additional interesting features like “hot schema changes,” all the while making our dear flash storage last longer.

While I’ve kept an eye on the evolution of TokuDB this past year, I hadn’t given it a try… until recently, when a beta version of Percona Server was released with support for TokuDB as a plugin.

If you haven’t tried TokuDB yet here’s an opportunity to do so. This first post looks into giving a bit of context about how TokuDB fits into MySQL and the next one, to be published in the following days, will document my experience in installing it with Percona Server. I hope you enjoy both and I would appreciate it if you’d take the time to add your comments and questions at the end of the post, so we can learn from each other.

The rise of InnoDB

As most of you know well, the heart of MySQL lies in the storage engine. InnoDB has completely changed MySQL, bringing not only transactional capabilities but also stability and new levels of maturity to the whole system. Even those that don’t really need transactions rejoice in the crash-resistance strength of InnoDB. But do you remember that not so long ago InnoDB was a third-party proprietary plugin? At first you would need to compile MySQL with it. Later they made it all easier by allowing the installation and loading of the plugin into existing servers. But things really started to flourish when InnoDB went open source: its adoption increased and slowly it started gaining traction and people’s trust. With the code available for anyone to see, fix and extend, companies started committing their own resources into making InnoDB better, until it became the de facto storage engine of MySQL.

Balancing “big data” and storage cost

It’s true that today data stored (even compressed) in an InnoDB table takes more disk space compared to a similar MyISAM table, but no one would expect there to be no trade-offs in the development of a new technology. At the same time, disk capacities were also increasing, which improved the bytes/$ ratio and somewhat compensated for InnoDB’s larger appetite.

But the increase in disk capacity also pushed the boundaries of what’s worth storing. What was for many the unlimited capacity of Gigabyte disks became, well, a limit, and then Terabyte disks developed into a “must-have”, a true need. At the same time though, with so many interesting things to see and surf, people’s attention became disputed, and what was a commodity before transformed into a scarce good. Today, if a website takes more than a handful of seconds to load, chances are it may lose the attention of some. SSD disks came to the rescue here, providing access to data in a fraction of the time a regular spindle disk would take. However, they don’t scale as well: the increase in cost per byte is proportional to the data access speed gains they brought, and the longevity (or durability) of SSD disks is not as good, which makes for an expensive investment. They need to be used wisely. For this reason, it’s becoming common to find a mixed use of fast and expensive SSD drives to store “popular” data and slower, cheaper spindle disks to store all the rest. Of course, this is a short-term solution as it’s not that practical to maintain and requires a good amount of manual labor to decide which one stores what. In the long haul, it’s safe to predict SSD-based solutions will prosper as inexpensive storage, but until then it is necessary to find a compromise between “big data” and hardware investment.

TokuDB’s premise

Another way of tackling this problem is changing the logical part. If one could store more data in the same amount of disk space and yet be able to store and retrieve it as fast, or even faster, then we would possibly get better results (in terms of performance) and a better return on the investment made in storage. That was the approach targeted by Tokutek in the development of the TokuDB storage engine. The core of its architecture is based on a different, modern indexing approach, baptized Fractal Tree Indexes (FTI). I say “different” because most popular storage engines such as MyISAM and InnoDB have a B-tree indexing foundation, which has remained the somewhat “unchallenged” standard for at least the past three decades. And “modern” because its design takes into account the write-intensive workloads we see rising more and more in contemporary data systems, as well as the “wear and tear” characteristics of the latest storage devices.

Both data structures are tree-based, store data in similar leaf nodes and make use of an index key for ordering. But the way they manage and store data across the trees is different. TokuDB and its Fractal Tree structure make use of larger block sizes (bigger leaves) compared to InnoDB’s implementation of B-tree, which allows for better compression (the key to using less disk space) while also improving the performance of range queries. Just as important, TokuDB claims to make better use of I/O by employing a message propagation system and an “optimal” buffering mechanism.

While in a traditional B-tree based system a change made to the table would be reflected in an update to the index to accommodate it, TokuDB treats each change as a message at first. The interesting bit here is that even before the message reaches the respective leaf and modifies it, the changes it carries are already accounted for by the database. It is as if the database’s content were composed of the data found in the nodes plus the messages circulating in the tree. This brings agility to the storage engine and plays an important role in providing Hot Schema Changes, for instance.

Regarding the optimized I/O buffering system, it is partly inherent to the use of bigger leaves. Or if you prefer, the other way around: the practical use of bigger leaves is made possible because buffers are used in a more efficient way. Efficiency here is measured according to bandwidth use. Remember, an I/O to disk is many times more expensive (in time) than an I/O to memory; that’s why buffers are used – you fill data into the buffer more frequently (less costly) so you can “flush” its content to disk less frequently (more costly). The fuller the buffer is when you flush it to disk, the more efficiently you are using bandwidth. TokuDB tries to make the most out of it, “with hundreds or thousands of operations for a single I/O“. The problem with B-trees is that, by design, it’s difficult to implement an efficient buffering system, and you tend to flush slightly filled buffers often. For this reason it is better to maintain smaller leaves in B-trees, which has the side effect of allowing for less effective compression. Tim Callaghan, head of engineering at Tokutek, explained those differences much better than I can at Percona Live London last November, and his slides are available here.

One scenario that benefits from this optimized use of I/O is write-intensive applications. We have recently been using TokuDB with our Percona Cloud Tools (PCT) service to store and analyze slow query logs from MySQL servers. The compression benefits were also a driving reason in the choice of TokuDB as the storage engine for PCT, without which we would be much more limited in the number of organizations we could accommodate in this beta phase of the service. How big is the compression impact? Like everything else in MySQL, it depends on your schema. Shlomi Noach reported he was able to convert 4 Terabytes worth of uncompressed InnoDB data (or 2 Terabytes of compressed InnoDB data using KEY_BLOCK_SIZE=8) down to 200 Gigabytes. It may be that impressive.
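If you want to experiment with this on your own data, a rough sketch of such a comparison could look like the following. The table and column names are made up for illustration, and TOKUDB_LZMA is just one of the row formats TokuDB offers (the default in the sandbox above was tokudb_zlib) – check the TokuDB documentation for the full list:

mysql> CREATE TABLE test.logs_innodb (id BIGINT PRIMARY KEY, payload TEXT)
    ->        ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
mysql> CREATE TABLE test.logs_tokudb (id BIGINT PRIMARY KEY, payload TEXT)
    ->        ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;

After loading the same data set into both tables, comparing the on-disk sizes (the .ibd file versus the *.tokudb files in the datadir) gives a realistic picture of the compression ratio for your particular schema.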

Compression alone is a hugely attractive characteristic of TokuDB, but the storage engine also fits well in scenarios where storage space is not an issue. The optimization in I/O can help lagging replicas where writes (INSERTs) are the limiting factor, and not the network. The “hot schema changes” functionality can be a blessing if you need to add a column to a big table, or a secondary index. There’s also the no less important impact on flash drive durability. Mark Callaghan commented the following in a past post in this blog: “For pure-disk servers the advantages of TokuDB versus InnoDB are limited to a few workloads. For pure-flash servers the advantages of TokuDB are universal — 2X better compression (compared to InnoDB compression on my data) and larger (more sequential) writes means that you will buy much less flash and either it will last longer or you can buy less-expensive flash and it will last long enough“. And let’s not forget Vadim’s favorite feature in TokuDB: following query progress live in SHOW PROCESSLIST.

The future

Tokutek was clever in breaking with tradition and looking at the problem from another angle in the development of TokuDB. It benefited from the openness of MySQL and its storage engine API to implement a different solution, one that contemplates the reality of today – faster multi-core CPUs, modern but more “fragile” storage devices and a thirst for “big data.” Of course, it benefited as well from observing how B-tree based storage engines coped with evolving data systems in the last decades, and from the development of new algorithms, to come up with a new approach. And to make some things simpler along the way. It’s easier to tune TokuDB compared to InnoDB: I’ve counted about 40 “tokudb_” variables while we find at least 100 more “innodb_“ ones. But it has yet to endure the test of time. Even though we’re not talking about a brand-new storage engine (Vadim reported his first experiences with it 5 years ago), it has only recently gone open source and community adoption is still in its initial phase, though steadily growing, as we can see by the number of opened bugs.

One thing that must worry many is the fact that there’s no open source hot backup software for TokuDB. Even though there’s a community HotBackup API available on GitHub, which is a specification for a pluggable backup utility, the only working hot backup solution available today is bundled in the Enterprise Edition of TokuDB. And since the design of TokuDB doesn’t allow for a backup approach based on copying the database files and then applying the logs containing the changes made in the database during the backup – which is how MySQL Enterprise Backup and XtraBackup work – there’s no hope of easily extending an existing open source tool such as Percona XtraBackup to include TokuDB.

Hopefully we’ll see new open source backup software implementing the available API in the near future, but for now it seems the community is left with filesystem-level snapshot-based tools such as mylvmbackup and xfs_freeze as the sole alternatives to the proprietary solution.

The post Getting to know TokuDB for MySQL appeared first on MySQL Performance Blog.

How to setup Docker for Percona ClusterControl and add existing Percona XtraDB Cluster

Latest MySQL Performance Blog posts - June 20, 2014 - 8:12am

In my previous post I showed you how to set up Percona XtraDB Cluster 5.6 on Docker. This time I will show you how to set up Percona ClusterControl and add the existing Percona XtraDB Cluster 5.6 that we managed to set up in the previous post.

Let us note the following details about our existing containers:

  • 172.17.0.2 dockerpxc1
  • 172.17.0.3 dockerpxc2
  • 172.17.0.4 dockerpxc3
  • 172.17.0.5 dockerccui-test

A quick tip for everyone who has followed my previous blog on setting up Percona XtraDB Cluster 5.6 on Docker: I did not install OpenSSH on the Docker instances on purpose and relied on the ‘docker attach’ command to get into each container. For this case, however, we will need to install openssh-server in each container, make sure SSH is running, and set up SSH key access so the Percona ClusterControl container can SSH into each Percona XtraDB Cluster node.
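For reference, here is roughly what that looks like on my setup – treat it as a sketch rather than a recipe, since it assumes the cluster containers are Debian/Ubuntu-based (the openssh-server package name suggests they are) and that the ClusterControl container described in the next section already exists:

# inside each Percona XtraDB Cluster container (dockerpxc1, dockerpxc2, dockerpxc3)
root@dockerpxc1:/# apt-get update && apt-get install -y openssh-server
root@dockerpxc1:/# service ssh start

# inside the ClusterControl container: create a key and copy it to every node
root@dockerccui-test:/# ssh-keygen -t rsa
root@dockerccui-test:/# ssh-copy-id root@172.17.0.2
root@dockerccui-test:/# ssh-copy-id root@172.17.0.3
root@dockerccui-test:/# ssh-copy-id root@172.17.0.4

Depending on the image, you may also need to set a root password and allow root logins in /etc/ssh/sshd_config before ssh-copy-id will work.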

Create the Percona ClusterControl UI Docker container

We will need to create a docker container manually instead of building a container from a Dockerfile since we can’t run the ClusterControl installation non-interactively.

root@Perconallc-Support / # docker run --name dockerccui-test -p 80 -i -t ubuntu:12.04 bash

Notice that I had to add ‘-p 80’ to expose port 80 to the host network so we can access the Percona ClusterControl UI from a web browser. We will use the Ubuntu 12.04 Docker image. I will show you further down how to check the port that was dynamically allocated on the host network.

I would recommend running ‘apt-get upgrade’ and ‘apt-get dist-upgrade’ just to make sure we have the latest software packages installed. Install the wget and lsb-release packages since these are needed in the next steps.

Download and run the Percona ClusterControl installer and follow the instructions at the prompt.

root@Perconallc-Support / # chmod +x install-cc.sh
root@Perconallc-Support / # ./install-cc.sh

The installer will give you several options, one of which is to install Percona Server as Percona ClusterControl’s backend database; I’d highly recommend choosing ‘Yes’. If everything goes well you will then continue the setup of Percona ClusterControl in the web browser. If you missed installing lsb-release earlier you will get an error midway through the installation, but you can always install the lsb-release package and re-run the installation.

To identify the exposed port on the host side, we need to inspect the container:

root@Perconallc-Support / # docker inspect dockerccui-test | grep HostPort
                "HostPort": "49154"
                "HostPort": "49154"

As we can see, port 49154 was dynamically allocated on the host network and mapped to port 80 of the Docker instance. You may also explicitly set the port mapping on the host network with ‘-p {hostPort}:{containerPort}’; please consult the official Docker documentation for further reading.
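For example, if you prefer a fixed port instead of a dynamically allocated one, the container could have been started like this (host port 8080 is an arbitrary choice here):

root@Perconallc-Support / # docker run --name dockerccui-test -p 8080:80 -i -t ubuntu:12.04 bash

The UI would then always be reachable on port 8080 of the host, with no need to run ‘docker inspect’ to find the mapping.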

Setup Percona ClusterControl on the web browser

We can now access the Percona ClusterControl user interface through http://{Host}:49154/clustercontrol. Use the username (in email form) that you indicated during the installation and the default password ‘admin’ to log in.

We should see the following page after successfully logging in:

Percona ClusterControl Wizard

Select ‘Add an existing cluster’ and click Next then follow further instructions to get to the next page:

Add existing cluster

If all goes well you will see the cluster listed under Database Clusters and can view your cluster nodes.

Percona ClusterControl UI on Docker container with Percona XtraDB Clusters 5.6 with each node on docker containers


Summary

In this blog I showed you how to set up Percona ClusterControl on Docker and add an existing cluster to ClusterControl:

* Created a Docker container for Percona ClusterControl
* Downloaded and installed Percona ClusterControl
* Added the existing Percona XtraDB Cluster to the Percona ClusterControl UI
* Profit!


For those who are new to Docker and containerization you may read through Patrick Galbraith’s blog series about Docker.

The post How to setup Docker for Percona ClusterControl and add existing Percona XtraDB Cluster appeared first on MySQL Performance Blog.

Using UDFs for geo-distance search in MySQL

Latest MySQL Performance Blog posts - June 19, 2014 - 7:44am

In my previous post about geo-spatial search in MySQL I described (along with other things) how to use geo-distance functions. In this post I will describe the geo-spatial distance functions in more detail.

If you need to calculate an exact distance between 2 points on Earth in MySQL (very common for geo-enabled applications) you have at least 3 choices.

  • Use stored function and implement haversine formula
  • Use UDF (user defined function) for haversine (see below)
  • In MySQL 5.6 you can use st_distance function (newly documented), however, you will get the distance on plane and not on earth; the value returned will be good for sorting by distance but will not represent actual miles or kilometers.

MySQL stored function for calculating distance on Earth

I previously gave an example for a MySQL-stored function which implements the haversine formula. However, the approach I used was not very precise: it was optimized for speed. If you need a more precise haversine formula implementation you can use this function (result will be in miles):

delimiter //
create DEFINER = CURRENT_USER function haversine_distance_sp
    (lat1 double, lon1 double, lat2 double, lon2 double) returns double
begin
    declare R double DEFAULT 3958.76; -- Earth radius in miles
    declare phi1 double;
    declare phi2 double;
    declare d_phi double;
    declare d_lambda double;
    declare a double;
    declare c double;
    declare d double;
    set phi1 = radians(lat1);
    set phi2 = radians(lat2);
    set d_phi = radians(lat2-lat1);
    set d_lambda = radians(lon2-lon1);
    set a = sin(d_phi/2) * sin(d_phi/2) + cos(phi1) * cos(phi2) * sin(d_lambda/2) * sin(d_lambda/2);
    set c = 2 * atan2(sqrt(a), sqrt(1-a));
    set d = R * c;
    return d;
end;
//
delimiter ;

(the algorithm is based on the standard formula, I’ve used the well-known Movable Type scripts calculator)

This is a slower implementation, as it uses the arctangent, but it is more precise.
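As a quick check, calling it with the same pair of coordinates used for the UDF example below should return roughly the same ~28.3-mile figure:

mysql> select haversine_distance_sp(37.470295464, -122.572938858498, 37.760150536, -122.20701914150199) as dist_in_miles;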

MySQL UDF for Haversine distance

Another approach, which will give you much better performance, is to use a UDF. There are a number of implementations; I’ve used lib_mysqludf_haversine.

Here are the simple steps to install it in MySQL 5.6 (it will also work with earlier versions):

$ wget 'https://github.com/lucasepe/lib_mysqludf_haversine/archive/master.zip'
$ unzip master.zip
$ cd lib_mysqludf_haversine-master/
$ make

mysql> show global variables like 'plugin%';
+---------------+-------------------------+
| Variable_name | Value                   |
+---------------+-------------------------+
| plugin_dir    | /usr/lib64/mysql/plugin |
+---------------+-------------------------+
1 row in set (0.00 sec)

$ sudo cp lib_mysqludf_haversine.so /usr/lib64/mysql/plugin/

mysql> CREATE FUNCTION haversine_distance RETURNS REAL SONAME 'lib_mysqludf_haversine.so';
mysql> select haversine_distance(37.470295464, -122.572938858498, 37.760150536, -122.20701914150199, 'mi') as dist_in_miles;
+---------------+
| dist_in_miles |
+---------------+
|     28.330467 |
+---------------+
1 row in set (0.00 sec)

Please note:

  • Make sure you have the mysql-devel or percona-server-devel package installed (MySQL development libraries) before installing.
  • You will need to specify the last parameter to be “mi” if you want to get the results in miles, otherwise it will give you kilometers.

MySQL ST_distance function

In MySQL 5.6 you can use ST_distance function:

mysql> select st_distance(point(37.470295464, -122.572938858498), point(37.760150536, -122.20701914150199)) as distance_plane;
+---------------------+
| distance_plane      |
+---------------------+
| 0.46681174155173943 |
+---------------------+
1 row in set (0.00 sec)

As we can see, it does not give us an actual distance in miles or kilometers, as it does not take into account that we have latitude and longitude rather than X and Y on a plane.

Geo Distance Functions Performance

Stored procedures and functions in MySQL are known to be slower, especially with trigonometric functions. I did a quick test using the MySQL BENCHMARK function.

First I set 2 points (10 miles from SFO airport)

set @rlon1 = 122.572938858498;
set @rlat1 = 37.470295464;
set @rlon2 = -122.20701914150199;
set @rlat2 = 37.760150536;

Next I use 4 functions to benchmark:

  • Less precise stored function (haversine)
  • More precise stored function (haversine)
  • UDF for haversine
  • MySQL 5.6 native ST_distance (plane)

The benchmark function will execute the above function 100000 times.

Here are the results:

mysql> select benchmark(100000, haversine_old_sp(@rlat1, @rlon1, @rlat2, @rlon2)) as less_precise_mysql_stored_proc;
+--------------------------------+
| less_precise_mysql_stored_proc |
+--------------------------------+
|                              0 |
+--------------------------------+
1 row in set (1.46 sec)

mysql> select benchmark(100000, haversine_distance_sp(@rlat1, @rlon1, @rlat2, @rlon2)) as more_precise_mysql_stored_proc;
+--------------------------------+
| more_precise_mysql_stored_proc |
+--------------------------------+
|                              0 |
+--------------------------------+
1 row in set (2.58 sec)

mysql> select benchmark(100000, haversine_distance(@rlat1, @rlon1, @rlat2, @rlon2, 'mi')) as udf_haversine_function;
+------------------------+
| udf_haversine_function |
+------------------------+
|                      0 |
+------------------------+
1 row in set (0.17 sec)

mysql> select benchmark(100000, st_distance(point(@rlat1, @rlon1), point(@rlat2, @rlon1))) as mysql_builtin_st_distance;
+---------------------------+
| mysql_builtin_st_distance |
+---------------------------+
|                         0 |
+---------------------------+
1 row in set (0.10 sec)

As we can see, the UDF gives a much faster response time (comparable to the built-in function).

Benchmark chart (smaller is better)

Conclusion

The lib_mysqludf_haversine UDF provides a good function for geo-distance search in MySQL. Please let me know in the comments what geo-distance functions or approaches you use in your applications.

The post Using UDFs for geo-distance search in MySQL appeared first on MySQL Performance Blog.

“How to monitor MySQL performance” with Percona Cloud Tools: June 25 webinar

Latest MySQL Performance Blog posts - June 18, 2014 - 6:58am

We recently released a new version of Percona Cloud Tools with MySQL monitoring capabilities. Join me June 25 and learn the details about all of the great new features inside Percona Cloud Tools – which is now free in beta. The webinar is titled “Monitoring All (Yes, All!) MySQL Metrics with Percona Cloud Tools” and begins at 10 a.m. Pacific time.

In addition to MySQL metrics, Percona Cloud Tools also monitors OS performance-related stats. The new Percona-agent gathers metrics with fine granularity (up to once per second), so you are able to see any of these metrics updated real-time.

During the webinar I’ll explain how the new Percona-agent works and how to configure it. And I’ll demonstrate the standard dashboard with the most important MySQL metrics and how to read them to understand your MySQL performance.

Our goal with the new implementation was to make installation as easy as possible. Seriously, it should not take as much effort as it has in the past to get visibility into your MySQL performance. We also wanted to provide as much visibility as possible.

Please take a moment and register now for the webinar. I also encourage you, if you haven’t already, to sign up for access to the free Percona Cloud Tools beta ahead of time. At the end of the next week’s webinar you’ll know how to use the Percona-agent and will be able to start monitoring MySQL in less than 15 minutes!

See you June 25 and in the meantime you can check out our previous related posts: “From zero to full visibility of MySQL in 3 minutes with Percona Cloud Tools” and “Introducing the 3-Minute MySQL Monitor.”

The post “How to monitor MySQL performance” with Percona Cloud Tools: June 25 webinar appeared first on MySQL Performance Blog.

MySQL Backup Service from Percona

Latest MySQL Performance Blog posts - June 17, 2014 - 5:00am

The Percona Backup Service managed service launched today. It ensures properly configured backups run successfully as scheduled on customer provided backup storage – whether on premise, in the cloud, or a hybrid. Backup issues or production data recovery are efficiently handled by Percona Managed Services technicians with deep knowledge of MySQL.

As we state in our white papers, “Backup and recovery are a foundational piece of any infrastructure. A well-tested backup and recovery system can be the difference between a minor outage and the end of a business.” While MySQL backups are “foundational,” they still require constant management, and the most important use of a backup, namely recovery, is often complex.

The Percona MySQL Backup Service is available for companies using any variant of single-node or clustered MySQL — on premise or in the cloud. This includes MySQL, Percona Server, MariaDB, and Percona XtraDB Cluster.

Reliable MySQL Backup Services

Developed by MySQL experts with decades of operational experience, the Percona MySQL Backup Service is based on widely adopted open source database backup software solutions such as Percona XtraBackup. Monitoring software is installed to ensure MySQL backups run as expected and alert the Percona Managed Services team to any issues. Percona experts on duty 24x7x365 resolve problems before the availability of backups is impacted and can implement a full, partial, or point in time recovery to minimize downtime and data loss on production servers.

MySQL backup data sets are secure and compliant with regulatory requirements. 256 bit encryption meets or exceeds common security and compliance requirements. Internal procedures ensure that backups are managed with extreme care and are only stored safely on secure servers. Backups are available for on-demand recovery to comply with HIPAA, SOX, or PCI requirements.

The status of current and past backups is easily accessible through the Percona MySQL Backup Service customer portal. The portal includes instructions on how to use the backups to restore data for routine purposes such as restoration of development databases.

Efficient MySQL Data Recovery

Percona Backup Service technical experts respond within 30 minutes to ensure that production data is recovered quickly and as completely as possible. We can recover:

  • Data back to the last full capture image
  • Specific tables which saves significant time when only a single table or set of tables needs to be recovered
  • Full data to a specific point in time which ensures an application can be recovered to the same state as when the data was lost

Unlike database-as-a-service solutions, the Percona Backup Service can recover specific tables and full data to a specific point in time because we create additional backup points.

Cost Effective and Highly Flexible

Our MySQL backup-as-a-service solution costs less than managing backups in-house. Our 24x7x365 team serves multiple customers so the cost of the Percona MySQL Backup Service is lower than having someone on staff to manage backups. We also use a proprietary backup approach which significantly reduces the size of backup data sets, requiring much less storage space than conventional methods.

The Percona MySQL Backup Service accommodates any combination of MySQL server and backup server locations. We can work with MySQL server and backup servers on premise or in the cloud and can even configure the process to store one backup set on premise and another in the cloud for additional protection. This configuration flexibility means we can meet a wide range of data availability and budget needs.

MySQL Backup Service Features

The Percona Backup Service is designed so backups will run smoothly and reliably. The following features are included:

  • Customer portal for anytime access to current and past backup status as well as instructions on how to restore the MySQL backups for non-production purposes
  • Efficient data recovery for production issues with full, partial, and point in time recovery options
  • A high level of security with 256 bit encryption and backups only stored on the customer’s secure servers
  • Regulatory compliance with backups available for on-demand recovery to comply with HIPAA, SOX, and PCI requirements
  • Lower cost than managing MySQL backups in-house with 24x7x365 monitoring and issue resolution combined with a proprietary backup approach which significantly reduces the size of the backup data set versus conventional methods
  • Ability to accommodate any combination of MySQL server and backup server locations – on premise, in the cloud, or a hybrid
  • Flexible configuration options which enable the service to meet a wide range of data availability and budget requirements
Learn More

Learn more about our MySQL Backup Service solution as well as register for our upcoming “MySQL Backup and Recovery Best Practices” webinar on June 18, 2014. Contact us now to learn more about the Percona Backup Service and how we can ensure your backups are ready when you need them!

The post MySQL Backup Service from Percona appeared first on MySQL Performance Blog.

mydumper [less] locking

Latest MySQL Performance Blog posts - June 13, 2014 - 7:15am
In this post I would like to review how mydumper for MySQL works from the point of view of locking. Since the 0.6 series we have different options, so I will try to explain how they work.

As you may know mydumper is multithreaded, and this adds a lot of complexity compared with other logical backup tools, as it also needs to coordinate all threads on the same snapshot to be consistent. So let’s review how mydumper does this with the default settings.

By default mydumper uses 4 threads to dump data and 1 main thread:

Main Thread
  • FLUSH TABLES WITH READ LOCK
Dump Thread X
  • START TRANSACTION WITH CONSISTENT SNAPSHOT;
  • dump non-InnoDB tables
Main Thread
  • UNLOCK TABLES
Dump Thread X
  • dump InnoDB tables
As you can see, in this case we need FTWRL for two things: to coordinate the transactions’ snapshots and to dump the non-InnoDB tables in a consistent way. So we hold the global read lock until all non-InnoDB tables have been dumped.

What less-locking does is this:

Main Thread
  • FLUSH TABLES WITH READ LOCK
Dump Thread X
  • START TRANSACTION WITH CONSISTENT SNAPSHOT;
 LL Dump Thread X
  • LOCK TABLES non-InnoDB
Main Thread
  • UNLOCK TABLES
 LL Dump Thread X
  • dump non-InnoDB tables
  • UNLOCK non-InnoDB
Dump Thread X
  • dump InnoDB tables

So now the global read lock is only in place until the less-locking threads lock the non-InnoDB tables, and this is really fast. The only downside is that it uses double the number of threads, so with the default (4 threads) we end up having 9 connections, but only 4 will ever be running at the same time.
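To try both modes yourself, the invocations look roughly like this (option names as of the 0.6 series – double-check them against mydumper --help for your build, and the output directories are just examples):

# default mode: FTWRL held until all non-InnoDB tables are dumped
mydumper --threads 4 --outputdir /backups/dump1

# less-locking mode: FTWRL held only while the LL threads take their table locks
mydumper --threads 4 --less-locking --outputdir /backups/dump2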

Less-locking really helps when you have MyISAM or ARCHIVE tables that are not heavily updated by the production workload. Also, you should know that LOCK TABLES … READ LOCAL allows non-conflicting INSERTs on MyISAM, so if you use those tables to keep logs (append only) you will not notice that lock at all.
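A quick way to see that behavior for yourself, using a hypothetical append-only MyISAM log table (with the default concurrent_insert setting the INSERT is allowed as long as the table has no free blocks in the middle):

-- session 1: what a less-locking dump thread effectively does
mysql> LOCK TABLES logs READ LOCAL;

-- session 2: appends are still allowed on MyISAM while the READ LOCAL lock is held
mysql> INSERT INTO logs (msg) VALUES ('still writing');

-- session 1: release the lock once the dump of the table is done
mysql> UNLOCK TABLES;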

For the next release we will implement backup locks, which will allow us to avoid running FTWRL.

The post mydumper [less] locking appeared first on MySQL Performance Blog.

Announcing Percona XtraBackup 2.2.3 GA

Latest MySQL Performance Blog posts - June 12, 2014 - 7:13am

Percona is glad to announce the release of Percona XtraBackup 2.2.3 on June 12th 2014. Downloads are available from our download site here and Percona Software Repositories.

Percona XtraBackup enables backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backup. The new 2.2.3 GA version offers improved performance, enterprise-grade security, and lower resource usage.

This release is the first GA (Generally Available) stable release in the 2.2 series.

New Features:

  • Percona XtraBackup implemented support for Backup Locks.
  • Percona XtraBackup can now store backup history on the server itself in a special table created for that purpose.
  • Percona XtraBackup has been rebased on MySQL 5.6.17.

Bugs Fixed:

  • Fixed the InnoDB redo log incompatibility with 5.1/5.5 server and compressed tables which was introduced by the upstream fix in MySQL 5.6.11 that could make InnoDB crash on recovery when replaying redo logs created on earlier versions. Bug fixed #1255476.
  • Percona XtraBackup did not flush the InnoDB REDO log buffer before finalizing the log copy. This would only become a problem when the binary log coordinates were used after restoring from a backup: the actual data files state after recovery could be inconsistent with the binary log coordinates. Bug fixed #1320685.
  • innobackupex now sets wsrep_causal_reads to 0 before executing FLUSH TABLES WITH READ LOCK if the server is a member of the Galera cluster. Bug fixed #1320441.
  • storage/innobase/xtrabackup/CMakeLists.txt now honors the XB_DISTRIBUTION environment variable when configuring innobackupex.pl to innobackupex. Bug fixed #1320856.
  • Percona XtraBackup does not add XtraDB-specific fields when preparing a
    backup of an upstream MySQL server. Bug fixed #988310.
  • Information about backup type and parameters is now stored in the
    xtrabackup_info file in the backup directory. Bug fixed #1133017.
  • When backup locks are used, xtrabackup_slave_info should be written under BINLOG lock instead of TABLE lock. Bug fixed #1328532.

Release notes with all the bugfixes for Percona XtraBackup 2.2.3 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Announcing Percona XtraBackup 2.2.3 GA appeared first on MySQL Performance Blog.

Percona Server 5.6.17-66.0 is now available

Latest MySQL Performance Blog posts - June 11, 2014 - 11:20am

 

Percona is glad to announce the release of Percona Server 5.6.17-66.0 on June 11, 2014. Downloads are available here and from the Percona Software Repositories.

Based on MySQL 5.6.17, including all the bug fixes in it, Percona Server 5.6.17-66.0 is the current GA release in the Percona Server 5.6 series. All of Percona’s software is open-source and free, all the details of the release can be found in the 5.6.17-66.0 milestone at Launchpad.


New Features:

  • Percona Server has ported the MariaDB code enhancement for START TRANSACTION WITH CONSISTENT SNAPSHOT.
  • Percona Server has implemented the ability to make a clone of a snapshot created by START TRANSACTION WITH CONSISTENT SNAPSHOT in another session (sketched briefly after this list).
  • TokuDB Storage engine is now available as a separate package that can be installed along with the Percona Server 5.6.17-66.0. This feature is currently considered Release Candidate quality.
  • SQL-bench has been made compatible with the TokuDB storage engine.
  • HandlerSocket is now included in Percona Server 5.6, in addition to Percona Server 5.5.
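
A minimal sketch of the snapshot-cloning feature; the FROM SESSION syntax and the example session id are assumptions based on my reading of the 5.6.17-66.0 release notes, so verify them there:

-- Session A: open a consistent snapshot and note its connection id.
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT CONNECTION_ID();   -- suppose this returns 12345

-- Session B: attach to the very same snapshot (hypothetical id 12345).
START TRANSACTION WITH CONSISTENT SNAPSHOT FROM SESSION 12345;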

Bugs Fixed:

  • Fix for #1225189 introduced a regression in Percona Server 5.6.13-61.0 which could lead to an error when running mysql_install_db. Bug fixed #1314596.
  • InnoDB could crash if workload contained writes to compressed tables. Bug fixed #1305364.
  • GUI clients such as MySQL Workbench could not authenticate with a user defined with auth_pam_compat plugin. Bug fixed #1166938.
  • Help in Percona Server 5.6 command line client was linking to Percona Server 5.1 manual. Bug fixed #1198775.
  • InnoDB redo log resizing could crash when XtraDB changed page tracking was enabled. Bug fixed #1204072.
  • Audit Log wasn’t parsing escape characters correctly in the OLD format. Bug fixed #1313696.
  • Percona Server version was reported incorrectly in Debian/Ubuntu packages. Bug fixed #1319670.

Other bugs fixed: #1318537 (upstream #72615), #1318874, #1285618, #1272732, #1314568, #1271178, and #1323014.

Release notes for Percona Server 5.6.17-66.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.6.17-66.0 is now available appeared first on MySQL Performance Blog.

Measure the impact of MySQL configuration changes with Percona Cloud Tools

Latest MySQL Performance Blog posts - June 11, 2014 - 12:00am

When you make a change to your MySQL configuration in production it would be great to know the impact (a “before and after” type of picture). Some changes are obvious, and for many variables proper values can be determined beforehand, e.g. innodb_buffer_pool_size or innodb_log_file_size. However, there is one configuration variable that is much less obvious for many people working with MySQL: query_cache.

The idea of the query cache is great; however, there are a lot of issues with the MySQL query cache. One of the most important is the query cache mutex, which can cause severe contention on CPU-bound workloads. In MySQL 5.6 you can remove the mutex by disabling the query cache completely (set query_cache_type=0).
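
A minimal sketch of checking and then disabling the cache; the my.cnf lines are shown as comments because query_cache_type=0 is best set at server startup:

-- Check whether the query cache is enabled and how much memory it holds.
SHOW GLOBAL VARIABLES LIKE 'query_cache_type';
SHOW GLOBAL VARIABLES LIKE 'query_cache_size';

-- To disable it completely, set these in my.cnf and restart the server:
--   query_cache_type = 0
--   query_cache_size = 0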

There is a big issue with disabling query_cache though, and it is not a technical one: how do you convince your boss or the dev/operations team that the query cache is bad? Almost all measurements available in MySQL will tell you that the query cache is utilized: there will be a good number of Qcache_hits. On the other hand you may see the contention in the processlist, for example in one of the states associated with query cache contention: Waiting for query cache lock or Waiting on query cache mutex (if you are running Percona Server).
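
If you want to quantify that contention directly on the server, a simple check against the processlist looks like this:

-- Count connections currently blocked on the query cache.
SELECT COUNT(*) AS waiting_on_qcache
FROM information_schema.PROCESSLIST
WHERE STATE IN ('Waiting for query cache lock', 'Waiting on query cache mutex');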

Now you can use Percona Cloud Tools (with Percona Server) to actually see how the query cache is used, both globally and on a per-query basis.

Query cache visualization

Percona Server has a very useful additional feature: if you set the log_slow_verbosity option to “microtime,query_plan,innodb” it will also log whether each query was a query cache hit or a query cache miss. This gives you valuable insight into real query cache utilization.
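
If you manage this setting yourself, a minimal sketch looks like the following (assuming Percona Server, where these variables are dynamic; long_query_time=0 logs every query, so use it with care on busy systems):

-- Record query cache hit/miss (and more) for each slow log entry.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0;   -- log all queries (assumption: the overhead is acceptable)
SET GLOBAL log_slow_verbosity = 'microtime,query_plan,innodb';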

Percona Cloud Tools is really helpful here, as it visualizes this information and lets you see the actual “before and after” graphs. (It will set the log_slow_verbosity option to “microtime,query_plan,innodb” for you if you set Slow log verbosity = Full in the agent configuration options.)

Here is the example. I’ve disabled the query cache and then looked at the total query and lock time.

As we can see, the total query time (across all queries) decreased significantly. The “Average QPS” on this picture is actually a little bit misleading and should be named “Queries running at the same time” (it is calculated as query count / time). The above graph shows clear contention at the query cache level.

Just to confirm, we can look at the number of query cache hits:

The number of Query Cache “hits” dropped to 0 when we disabled the query cache.
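
You can cross-check the same thing straight from the server’s status counters:

-- These counters only grow while the query cache is active;
-- after disabling the cache they should stay flat.
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Qcache_inserts';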

Please note: if your workload is read-only and you are running the same complex query over and over again, the query cache may be helpful. Under normal circumstances, however, the query cache causes contention and hurts response time (as you can see above).

Percona Cloud Tools is a free beta, please give it a try and share your experience in the comments. If you want to learn more about it, join the June 25 webinar hosted by Vadim Tkachenko titled, “Monitoring All (Yes, All!) MySQL Metrics with Percona Cloud Tools.” The webinar, which starts at 10 a.m. Pacific time, is free but I recommend registering now to reserve your spot.


The post Measure the impact of MySQL configuration changes with Percona Cloud Tools appeared first on MySQL Performance Blog.

Architecture and Design of MySQL-powered applications: June 11 Webinar

Latest MySQL Performance Blog posts - June 9, 2014 - 1:21pm

The architecture of MySQL-powered applications is one of my favorite topics to talk about. It’s a very important topic because if you do not get the architecture right then you’re very likely to fail with your project – either from the standpoint of failing with performance, high availability or security requirements… or failing to deliver on time and at the planned cost.

It’s also a great topic because there is so much knowledge available these days about MySQL-powered applications. MySQL has been around for a rather long time compared with many other solutions – and now we know what architectures have enabled people to build successful MySQL-powered applications and grow them to hundreds of millions of users as well as what applications did not work out.

This level of maturity really allows us to essentially take “off-the-shelf” MySQL architectures that can be used to build very successful applications with very low risk, at least when it comes to the database backend.

June 11 at 10 a.m. Pacific

On Wednesday, June 11 at 10 a.m. Pacific I’ll be leading  a webinar titled, “Architecture and Design of MySQL Powered Applications” during which I’ll be covering a lot of high-level scenarios such as “architectures that have been proven to be successful.” I’ll also discuss the architectures best suited for different kinds of applications and different kind of teams. Additionally, I’ll address what’s changed in the MySQL space in the last few years and how those changes have impacted modern MySQL architecture design approaches.

I hope to see you June 11. Register now to reserve your spot!

The post Architecture and Design of MySQL-powered applications: June 11 Webinar appeared first on MySQL Performance Blog.

Using MySQL 5.6 Performance Schema in multi-tenant environments

Latest MySQL Performance Blog posts - June 9, 2014 - 7:33am

Hosting a shared MySQL instance for your internal or external clients (“multi-tenant”) has always been a challenge. A multi-tenant or “schema-per-customer” approach is pretty common nowadays for hosting multiple clients on the same MySQL server. One of the issues with this approach, however, is the lack of visibility: it is hard to tell how many resources (queries, disk, CPU, etc.) each user consumes.

Percona Server contains the userstats Google patch, which allows you to get resource utilization per user. The new MySQL 5.6 performance_schema has even more instrumentation, which can give you better visibility at the per-user or per-schema/per-database level. And if you are running MySQL 5.6.6 or higher, Performance Schema is already enabled (with a minimal set of “instrumentation” though) and ready to use. In this post I will share examples of using Performance Schema in multi-tenant environments.
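
As a quick, hedged sketch of the userstats alternative in Percona Server (the userstat variable and the INFORMATION_SCHEMA tables exist there; treat the exact column set and output as assumptions for your version):

-- Percona Server only: enable user/table statistics collection.
SET GLOBAL userstat = ON;

-- Per-user resource usage, busiest users first.
SELECT * FROM INFORMATION_SCHEMA.USER_STATISTICS
ORDER BY BUSY_TIME DESC
LIMIT 5;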

Overview

If you want to use Performance Schema, make sure it is enabled (enabled by default starting with MySQL 5.6.6):

mysql> show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+
1 row in set (0.00 sec)

Performance_schema provides you with “raw” metrics and it may be difficult to select data from it. The good news is that you can use the “sys” schema project by Mark Leith. The “sys” schema (old name: ps_helper) is a collection of views and stored procedures which provide reports in a human-readable format. The installation is easy: download it from GitHub and run:

$ mysql -u root -p < ./sys_56.sql

(it will only create database “sys” and a set of views and stored procedures/stored functions in it)

Examples

For a multi-tenant environment the most interesting part is the resource utilization breakdown. Let’s say you want to “charge per IO”, similar to the Amazon RDS for MySQL model. You can now run this simple query against the “sys” schema, which uses Performance Schema in MySQL 5.6 to generate the report (in my example all users start with “a”):

mysql sys> select * from user_summary_by_file_io where user like 'a%';
+-------+------------+-------------+
| user  | ios        | io_latency  |
+-------+------------+-------------+
| api01 | 3061469563 | 4.04h       |
| api03 | 912296937  | 1.36h       |
| api02 | 815473183  | 1.22h       |
| app01 | 9704381    | 00:01:06.33 |
| app02 | 1160149    | 8.18 s      |
| app03 | 1424065    | 7.23 s      |
+-------+------------+-------------+
6 rows in set (0.06 sec)

If you need more extensive metrics you can use this report:

mysql sys> select * from user_summary where user like 'a%' limit 10; +-------+------------+-------------------+-----------------------+-------------+------------+-----------------+---------------------+-------------------+--------------+ | user | statements | statement_latency | statement_avg_latency | table_scans | file_ios | file_io_latency | current_connections | total_connections | unique_hosts | +-------+------------+-------------------+-----------------------+-------------+------------+-----------------+---------------------+-------------------+--------------+ | api01 | 1837898366 | 192.29h | 376.65 us | 2221018 | 3064724488 | 4.05h | 22 | 73748519 | 1 | | api02 | 833712609 | 206.83h | 893.11 us | 1164 | 836753563 | 1.26h | 7 | 246730 | 1 | | api03 | 988875004 | 222.03h | 808.29 us | 1156 | 943363030 | 1.43h | 8 | 247639 | 1 | | app01 | 1864159 | 4.01h | 7.75 ms | 46756 | 9704623 | 00:01:06.33 | 0 | 300004 | 1 | | app02 | 812520 | 00:20:51.69 | 1.54 ms | 42513 | 1164954 | 8.37 s | 0 | 122994 | 1 | | app03 | 811558 | 00:21:59.57 | 1.63 ms | 42310 | 1429373 | 7.40 s | 0 | 123110 | 1 | +-------+------------+-------------------+-----------------------+-------------+------------+-----------------+---------------------+-------------------+--------------+ 6 rows in set

Or a breakdown per user per statement:

mysql sys> select user, statement, total, rows_sent, rows_examined, rows_affected, full_scans from user_summary_by_statement_type where user = 'api01' limit 100; +-------+-------------+-----------+-----------+---------------+---------------+------------+ | user | statement | total | rows_sent | rows_examined | rows_affected | full_scans | +-------+-------------+-----------+-----------+---------------+---------------+------------+ | api01 | insert | 368405774 | 0 | 0 | 494179199 | 0 | | api01 | commit | 376276686 | 0 | 0 | 0 | 0 | | api01 | select | 476129532 | 761038004 | 1067363348 | 0 | 0 | | api01 | update | 142896430 | 0 | 142896437 | 141978066 | 0 | | api01 | begin | 376280424 | 0 | 0 | 0 | 0 | | api01 | show_fields | 2215016 | 310102240 | 310102240 | 0 | 2215016 | | api01 | Quit | 73421361 | 0 | 0 | 0 | 0 | | api01 | change_db | 17258767 | 0 | 0 | 0 | 0 | | api01 | delete | 7896 | 0 | 7896 | 7896 | 0 | | api01 | rollback | 3711 | 0 | 0 | 0 | 0 | +-------+-------------+-----------+-----------+---------------+---------------+------------+ 10 rows in set (0.04 sec)

If you are using the “schema-per-customer” approach you can get the per-schema resource utilization using sys.schema_table_statistics. Example (from Peter Zaitsev’s webinar):

mysql [localhost] {msandbox} > select * from schema_table_statistics where table_schema='sbtest' limit 1 G *************************** 1. row *************************** table_schema: sbtest table_name: sbtest rows_fetched: 158764154 fetch_latency: 1.37h rows_inserted: 378901 insert_latency: 00:07:17.38 rows_updated: 1136714 update_latency: 00:45:40.08 rows_deleted: 378902 delete_latency: 00:03:00.34 io_read_requests: 636003 io_read: 9.70 GiB io_read_latency: 00:28:12.01 io_write_requests: 203925 io_write: 3.11 GiB io_write_latency: 17.26 s io_misc_requests: 2449 io_misc_latency: 3.87 s

This report may be really slow if you have lots of tables. If you are only interested in disk utilization per database you can directly query the performance_schema:

SELECT extract_schema_from_file_name(fsbi.file_name) AS table_schema, SUM(fsbi.count_read) AS io_read_requests, sys.format_bytes(SUM(fsbi.sum_number_of_bytes_read)) AS io_read, sys.format_time(SUM(fsbi.sum_timer_read)) AS io_read_latency, SUM(fsbi.count_write) AS io_write_requests, sys.format_bytes(SUM(fsbi.sum_number_of_bytes_write)) AS io_write, sys.format_time(SUM(fsbi.sum_timer_write)) AS io_write_latency, SUM(fsbi.count_misc) AS io_misc_requests, sys.format_time(SUM(fsbi.sum_timer_misc)) AS io_misc_latency FROM performance_schema.file_summary_by_instance AS fsbi GROUP BY table_schema ORDER BY io_write_requests DESC limit 10; +--------------------+------------------+------------+-----------------+-------------------+------------+------------------+------------------+-----------------+ | table_schema | io_read_requests | io_read | io_read_latency | io_write_requests | io_write | io_write_latency | io_misc_requests | io_misc_latency | +--------------------+------------------+------------+-----------------+-------------------+------------+------------------+------------------+-----------------+ | binlogs | 6943921992 | 19.57 TiB | 2.46h | 1124408959 | 6.60 TiB | 2.18h | 914411 | 00:03:55.75 | | d1 | 146448 | 2.24 GiB | 1.97 s | 22768241 | 241.44 GiB | 00:12:57.97 | 5095817 | 00:12:50.16 | | stats1 | 853 | 1.81 MiB | 97.12 ms | 1042768 | 15.93 GiB | 10.27 s | 292303 | 22.74 s | | stats2 | 3024 | 11.22 MiB | 781.96 ms | 750182 | 11.45 GiB | 9.32 s | 267780 | 50.84 s | | relaylogs | 105065 | 925.71 MiB | 312.82 ms | 447607 | 925.71 MiB | 1.19 s | 72169 | 530.70 ms | | o1268 | 2078 | 3.02 MiB | 7.40 ms | 1591 | 203.97 KiB | 1.17 ms | 2693 | 92.34 ms | +--------------------+------------------+------------+-----------------+-------------------+------------+------------------+------------------+-----------------+

Conclusion

Performance_schema in MySQL 5.6 is a very useful tool and can give you much better visibility into MySQL multi-user (multi-tenant) installations. It can also do much more, e.g. find all queries with temporary tables or profile locks and mutexes. The “sys” schema provides you with useful pre-defined reports; the table (view) names are almost self-explanatory:

+-----------------------------------------------+ | Tables_in_sys | +-----------------------------------------------+ | innodb_buffer_stats_by_schema | | innodb_buffer_stats_by_table | ... | processlist | ... | schema_tables_with_full_table_scans | | schema_unused_indexes | | statement_analysis | | statements_with_errors_or_warnings | | statements_with_full_table_scans | | statements_with_runtimes_in_95th_percentile | | statements_with_sorting | | statements_with_temp_tables | ...
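
For example, a quick look at the heaviest statements overall (view and column names as in the sys/ps_helper release for MySQL 5.6; treat them as assumptions if you run a different version):

-- statement_analysis is already ordered by total latency, worst first.
SELECT query, db, exec_count, total_latency
FROM sys.statement_analysis
LIMIT 5;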

If you are using MySQL 5.6, performance_schema will give you valuable additional insight.

The post Using MySQL 5.6 Performance Schema in multi-tenant environments appeared first on MySQL Performance Blog.

Why did we develop percona-agent in Go?

Latest MySQL Performance Blog posts - June 4, 2014 - 8:12am

We recently open-sourced our percona-agent and if you check out the source code, you’ll find that it is written in the Go programming language (aka Golang). For those not up to speed, the percona-agent is a real-time client-side agent for Percona Cloud Tools.

Our requirements for our agents are quite demanding. This is software that runs on real production servers, so it must be fast, reliable, lightweight and easy to distribute. Surprisingly enough, binaries compiled by Go fit these characteristics.

There are of course alternatives that we considered. On the scripting side: Perl, Python, PHP, Ruby et al. These are not necessarily fast, and distribution is also tricky. We have had enough experience with Percona Toolkit and Perl’s “module dependency hell.”

On the lower-level side there is C/C++, and I am sure we could produce an efficient agent. However, we also have experience with the distribution of Percona Server / Percona XtraDB Cluster / Percona XtraBackup. Well, I have had enough of the different versions of Ubuntu, Red Hat, CentOS and the rest of the flavors of Linux.

And, yeah, there is Java, but let me just say that we are not the Java sort of developers.

So what is so charming about Go? Go feels like a scripting language but produces executable binaries. I see it as having the ability to attack performance issues on two sides. First is the performance of the software developers: they are more productive working with scripting-like languages. Second is the performance of the final product: native self-contained binaries are more efficient than a script running through an interpreter.

It is worth noting that the included batteries (the set of packages that come with Go) are excellent, and in many cases they will be enough to get you going and produce software that is quite complex. And if that is not enough, there is also a list of packages and projects for Go.

Of course, there are some limitations you will find in Go (some of them are temporary I hope). These are:

1. The list of supported platforms is limited… FreeBSD (release 8 and above), Linux (ARM not supported), Mac OS X and Windows. There are no packages for Solaris yet.
2. A good MySQL driver is still a work in progress; the most advanced is Go-MySQL-Driver.
3. Go comes with built-in testing capabilities, but our testing enthusiast, Daniel, says it is not enough to build a complicated testing suite.
4. There is no support for “generics” (or “templates” if you are in the C++ world). Basically it means that if you developed a data structure that works with integers, you will need to copy-paste-replace to make it work with floats. Yes, there are workarounds, like using the “parent-to-everything” type “interface{}”, but often that is not efficient and just looks ugly.

There is also no automatic type conversion between ints and floats, so if you need to do complex math involving both you may end up with a lot of back-and-forth conversions, i.e. int(math.Floor(t.epsilon*float64(t.count*2)))

To finish, I would like to invite you to my webinar, “Monitoring All (Yes, All!) MySQL Metrics with Percona Cloud Tools” on Wednesday, June 25 at 10 a.m. Pacific Daylight Time, where I will talk on the new features in Percona Cloud Tools, including our new percona-agent.

The post Why did we develop percona-agent in Go? appeared first on MySQL Performance Blog.

Percona Toolkit 2.2.8 is now available

Latest MySQL Performance Blog posts - June 4, 2014 - 5:53am

Percona is glad to announce the release of Percona Toolkit 2.2.8 on June 4th, 2014 (Downloads are available here and from the Percona Software Repositories). This release is the current GA (Generally Available) stable release in the 2.2 series.

New Features:

Bugs Fixed:

All Percona software is free and open-source. Details of the release can be found in the release notes and the 2.2.8 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.

The post Percona Toolkit 2.2.8 is now available appeared first on MySQL Performance Blog.

Percona Server 5.5.37-35.1 is now available

Latest MySQL Performance Blog posts - June 3, 2014 - 7:47am


Percona is glad to announce the release of Percona Server 5.5.37-35.1 on June 3rd, 2014 (Downloads are available here and from the Percona Software Repositories). Based on MySQL 5.5.37, including all the bug fixes in it, Percona Server 5.5.37-35.1 is now the current stable release in the 5.5 series. All of Percona‘s software is open-source and free, all the details of the release can be found in the 5.5.37-35.1 milestone at Launchpad.

Bugs Fixed:

  • InnoDB could crash if workload contained writes to compressed tables. Bug fixed #1305364.
  • GUI clients such as MySQL Workbench could not authenticate with a user defined with auth_pam_compat plugin. Bug fixed #1166938.
  • Help in Percona Server 5.5 command line client was linking to Percona Server 5.1 manual. Bug fixed #1198775.
  • Audit Log wasn’t parsing escape characters correctly in the OLD format. Bug fixed #1313696.
  • Percona Server version was reported incorrectly in Debian/Ubuntu packages. Bug fixed #1319670.

Other bugs fixed: #1272732, #1219833, #1271178, and #1314568.

Release notes for Percona Server 5.5.37-35.1 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.37-35.1 is now available appeared first on MySQL Performance Blog.

Do not trust vmstat IOwait numbers

Latest MySQL Performance Blog posts - June 3, 2014 - 3:00am

I’ve been running a benchmark today on my old test box with conventional hard drives (no RAID controller with a BBU) and noticed something unusual in the reported CPU utilization statistics.

The benchmark was run like this:

sysbench --num-threads=64 --max-requests=0 --max-time=600000 --report-interval=10 --test=oltp --db-driver=mysql --oltp-dist-type=special --oltp-table-size=1000000 --mysql-user=root --mysql-password=password run

Which means: create 64 threads and hammer the database with queries as quickly as possible. As the test was run on localhost, I would expect the benchmark to completely saturate the system – either using the CPU or being blocked on IO. The nature of this benchmark is such that it does not spend much time on database locks, especially as this system has just 2 cores.

Looking at VMSTAT however I noticed this:

[root@smt1 mysql]# vmstat 10 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 13 0 0 880156 40060 2140824 0 0 12 750 895 1045 32 8 54 6 0 0 0 0 877116 40060 2141312 0 0 0 1783 2185 23112 44 10 41 5 0 15 0 0 872648 40068 2141960 0 0 0 1747 2204 25743 41 11 46 2 0 0 0 0 868056 40068 2142604 0 0 0 1803 2164 26224 40 11 44 5 0 17 1 0 863216 40068 2143160 0 0 0 1875 1948 23020 36 9 50 5 0 0 0 0 858384 40168 2143656 0 0 0 1063 1855 21116 32 9 45 14 0 23 0 0 855848 40176 2144232 0 0 0 1755 2036 23181 36 10 48 6 0 49 0 0 851248 40184 2144648 0 0 0 1679 2313 22832 45 10 40 5 0 10 0 0 846292 40192 2145248 0 0 0 1911 1980 23185 36 9 50 4 0 0 0 0 844260 40196 2145868 0 0 0 1757 2152 26387 39 11 45 5 0 0 3 0 839296 40196 2146520 0 0 0 1439 2104 25096 38 10 50 1 0

Looking at the last few columns, IO wait is typically in the single-digit percents while the “idle” CPU floats in the 40-50% range. You can also see it visually in the graph from Percona Cloud Tools.

So the system should be completely saturated with the load, but instead we have ~50% of the CPU idle and iowait typically in single digits?

Running SHOW PROCESSLIST can give us a good idea of what is going on in this case:

mysql> show processlist; +-----+---------------+-----------------+--------+---------+------+----------------+--------------------------------------------------------------------+-----------+---------------+-----------+ | Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined | Rows_read | +-----+---------------+-----------------+--------+---------+------+----------------+--------------------------------------------------------------------+-----------+---------------+-----------+ | 1 | root | localhost:39624 | NULL | Sleep | 0 | | NULL | 0 | 0 | 0 | | ... | 32 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 33 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 34 | root | localhost | sbtest | Execute | 0 | Sorting result | SELECT c from sbtest where id between 365260 and 365359 order by c | 0 | 0 | 0 | | 35 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 36 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 37 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 38 | root | localhost | sbtest | Execute | 0 | Writing to net | DELETE from sbtest where id=496460 | 0 | 1 | 1 | | 39 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | ... | 89 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 90 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 91 | root | localhost | sbtest | Execute | 0 | NULL | COMMIT | 0 | 0 | 0 | | 268 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0 | 0 | 0 | +-----+---------------+-----------------+--------+---------+------+----------------+--------------------------------------------------------------------+-----------+---------------+-----------+ 68 rows in set (0.00 sec)

So most of the queries are stalling in the COMMIT stage. Checking the setting we can see:

mysql> select @@innodb_flush_log_at_trx_commit;
+----------------------------------+
| @@innodb_flush_log_at_trx_commit |
+----------------------------------+
|                                1 |
+----------------------------------+
1 row in set (0.00 sec)

Which means Percona Server is running in fully ACID mode; with slow hard drives and no BBU this means every commit has to sync data to disk, which is very expensive.

So we figured out what is really happening… yet if we are really waiting on disk a lot, why does iowait not show that?
To answer that, let’s see how IOwait is really computed.

IOwait is a type of idle time (and in earlier Linux versions it was included in idle): when there is a process/thread which would be runnable if it were not waiting on IO, that time is counted as iowait; if it is not runnable because of something else (waiting on a mutex, the network, etc.) the time is counted as idle.

Writes to the InnoDB log file are serialized – only one thread does the log write (and sync) while all the others waiting to commit block on some synchronization mechanism until they are notified that their commit is complete as part of group commit.

What is the takeaway? Take the iowait% numbers from vmstat/iostat with a grain of salt. There may be much more to gain from a faster IO subsystem (or proper configuration) than the iowait numbers suggest. Consider the iowait% + idle% of the CPU as “CPU available” which could potentially be put to good use by improving the IO subsystem and minimizing contention.

To illustrate that, let’s take a look at the CPU graph again – now with innodb_flush_log_at_trx_commit=2.
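
The change itself is a one-liner (note the trade-off: with this setting a crash can lose up to roughly a second of committed transactions):

-- Write the log buffer at commit, but fsync to disk only about once per second.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;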

As we can see from the graph, we have reclaimed much more than the 6-7% of iowait time we had: user CPU usage increased to 70% and idle+iowait is now 10-15%; the throughput of the system increased by about 60% – much more than simply “eliminating iowait” would allow.

The post Do not trust vmstat IOwait numbers appeared first on MySQL Performance Blog.

Using InfiniDB MySQL server with Hadoop cluster for data analytics

Latest MySQL Performance Blog posts - June 2, 2014 - 9:58am

In my previous post about Hadoop and Impala I benchmarked performance of analytical queries in Impala.

This time I’ve tried InfiniDB for Hadoop (the open-source version) on modern hardware with an 8-node Hadoop cluster. One of the main advantages (at least for me) of InfiniDB for Hadoop is that it stores the data inside the Hadoop cluster but uses a MySQL server to execute queries. This allows for an easy “migration” of existing analytical tools. The results are quite interesting and promising.

Quick How-To

The InfiniDB documentation is not very clear on step-by-step instructions so I’ve created this quick guide:

  1. Install a Hadoop cluster (a minimum install will work). I’ve used Cloudera Manager (CDH5) to compare the speed of InfiniDB to Cloudera Impala. Install the tools listed in the “Pre-requirements” section of the InfiniDB for Hadoop manual.
  2. Install the InfiniDB for Hadoop binaries on one Hadoop node (you can choose any node). This will install InfiniDB and its version of MySQL (based on MySQL 5.1).
  3. After installation it will tell you the variables to set and run the postConfigure script. Example:
    export JAVA_HOME=/usr/java/jdk1.6.0_31
    export LD_LIBRARY_PATH=/usr/java/jdk1.6.0_31/jre/lib/amd64/server
    . /root/setenv-hdfs-20
    /usr/local/Calpont/bin/postConfigure
  4. The postConfigure script will ask a number of questions. A couple of important notes:
  • Make sure to use HDFS as a “type of Data Storage”.
  • The performance module 1 (pm1) should point to the host (hostname and IP) you are running the postConfigure script on. Other pm(s) should point to other Hadoop nodes

When the installation is finished you will be able to log in to the MySQL server using a script called ibdmysql, which calls the mysql CLI with the correct socket and port. Check that InfiniDB is enabled by running “show engines”; InfiniDB should be in the list.
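
For example, from the ibdmysql client:

-- InfiniDB should be listed with Support = YES.
SHOW ENGINES;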

The next step will be importing data.

Data import

First we will need to create a MySQL table with “engine=InfiniDB”:

CREATE TABLE `ontime` ( `YearD` int(11) NOT NULL, `Quarter` tinyint(4) DEFAULT NULL, `MonthD` tinyint(4) DEFAULT NULL, `DayofMonth` tinyint(4) DEFAULT NULL, `DayOfWeek` tinyint(4) DEFAULT NULL, `FlightDate` date DEFAULT NULL, ... ) ENGINE=InfiniDB DEFAULT CHARSET=latin1

Second, I used cpimport to load the data. It turned out to be much more efficient and easier to load one big file rather than 20×12 smaller files (the original “ontime” data is one file per month), so I exported the “Ontime” data from the MySQL table and created one big file, “ontime.psv”.

I used the following command to import the data into InfiniDB:

[root@n0 ontime]# /usr/local/Calpont/bin/cpimport -s '|' ontime ontime ontime.psv
2014-05-20 15:12:58 (18787) INFO : Running distributed import (mode 1) on all PMs...
2014-05-20 15:25:28 (18787) INFO : For table ontime.ontime: 155083620 rows processed and 155083620 rows inserted.
2014-05-20 15:25:28 (18787) INFO : Bulk load completed, total run time : 751.561 seconds

The data is stored in Hadoop:

[root@n0 ontime]# hdfs dfs -du -h /usr/local/Calpont
1.4 G  /usr/local/Calpont/data1
1.4 G  /usr/local/Calpont/data2
1.4 G  /usr/local/Calpont/data3
1.4 G  /usr/local/Calpont/data4
1.4 G  /usr/local/Calpont/data5
1.4 G  /usr/local/Calpont/data6
1.4 G  /usr/local/Calpont/data7
1.4 G  /usr/local/Calpont/data8

The total size of the data is 8 × 1.4G = 11.2G (compressed). For comparison, the same dataset in Impala Parquet format is 3.6G. The original size was ~60G.

[root@n0 ontime]# hdfs dfs -du -h /user/hive/warehouse
3.6 G  /user/hive/warehouse/ontime_parquet_snappy

Now we can run the 2 queries I’ve tested before:

1. Simple group-by

mysql> select yeard, count(*) from ontime group by yeard order by yeard; +-------+----------+ | yeard | count(*) | +-------+----------+ | 1988 | 5202096 | | 1989 | 5041200 | | 1990 | 5270893 | | 1991 | 5076925 | | 1992 | 5092157 | | 1993 | 5070501 | | 1994 | 5180048 | | 1995 | 5327435 | | 1996 | 5351983 | | 1997 | 5411843 | | 1998 | 5384721 | | 1999 | 5527884 | | 2000 | 5683047 | | 2001 | 5967780 | | 2002 | 5271359 | | 2003 | 6488540 | | 2004 | 7129270 | | 2005 | 7140596 | | 2006 | 7141922 | | 2007 | 7455458 | | 2008 | 7009726 | | 2009 | 6450285 | | 2010 | 6450117 | | 2011 | 6085281 | | 2012 | 6096762 | | 2013 | 6369482 | | 2014 | 1406309 | +-------+----------+ 27 rows in set (0.22 sec)

2. The complex query from my original post:

mysql> select min(yeard), max(yeard), Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and flightdate < '2010-01-01' GROUP by carrier HAVING cnt > 100000 and max(yeard) > 1990 ORDER by rate DESC, cnt desc LIMIT 1000; +------------+------------+---------+----------+-----------------+------+ | min(yeard) | max(yeard) | Carrier | cnt | flights_delayed | rate | +------------+------------+---------+----------+-----------------+------+ | 2003 | 2009 | EV | 1454777 | 237698 | 0.16 | | 2003 | 2009 | FL | 1082489 | 158748 | 0.15 | | 2006 | 2009 | YV | 740608 | 110389 | 0.15 | | 2006 | 2009 | XE | 1016010 | 152431 | 0.15 | | 2003 | 2009 | B6 | 683874 | 103677 | 0.15 | | 2001 | 2009 | MQ | 3238137 | 448037 | 0.14 | | 2003 | 2005 | DH | 501056 | 69833 | 0.14 | | 2004 | 2009 | OH | 1195868 | 160071 | 0.13 | | 2003 | 2006 | RU | 1007248 | 126733 | 0.13 | | 1988 | 2009 | UA | 9593284 | 1197053 | 0.12 | | 2003 | 2006 | TZ | 136735 | 16496 | 0.12 | | 1988 | 2001 | TW | 2656286 | 280283 | 0.11 | | 1988 | 2009 | AA | 10568437 | 1183786 | 0.11 | | 1988 | 2009 | CO | 6023831 | 673354 | 0.11 | | 1988 | 2009 | DL | 11866515 | 1156048 | 0.10 | | 2003 | 2009 | OO | 2654259 | 257069 | 0.10 | | 1988 | 2009 | AS | 1506003 | 146920 | 0.10 | | 2007 | 2009 | 9E | 577244 | 59440 | 0.10 | | 1988 | 2009 | US | 10276862 | 990995 | 0.10 | | 1988 | 2009 | NW | 7601727 | 725460 | 0.10 | | 1988 | 2005 | HP | 2607603 | 235675 | 0.09 | | 1988 | 2009 | WN | 12722174 | 1107840 | 0.09 | | 2005 | 2009 | F9 | 307569 | 28679 | 0.09 | | 1988 | 1991 | PA | 203401 | 19263 | 0.09 | +------------+------------+---------+----------+-----------------+------+ 24 rows in set (0.86 sec)

The same query in Impala (on the same hardware) runs in 7.18 seconds:

[n8.local:21000] > select min(yeard), max(yeard), Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_parquet_snappy WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and flightdate < '2010-01-01' GROUP by carrier HAVING cnt > 100000 and max(yeard) > 1990 ORDER by rate DESC LIMIT 1000; Query: select min(yeard), max(yeard), Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_parquet_snappy WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and flightdate < '2010-01-01' GROUP by carrier HAVING cnt > 100000 and max(yeard) > 1990 ORDER by rate DESC LIMIT 1000 +------------+------------+---------+----------+-----------------+------+ | min(yeard) | max(yeard) | carrier | cnt | flights_delayed | rate | +------------+------------+---------+----------+-----------------+------+ | 2003 | 2009 | EV | 1454777 | 237698 | 0.16 | | 2003 | 2009 | FL | 1082489 | 158748 | 0.15 | | 2006 | 2009 | XE | 1016010 | 152431 | 0.15 | | 2006 | 2009 | YV | 740608 | 110389 | 0.15 | | 2003 | 2009 | B6 | 683874 | 103677 | 0.15 | | 2001 | 2009 | MQ | 3238137 | 448037 | 0.14 | | 2003 | 2005 | DH | 501056 | 69833 | 0.14 | | 2004 | 2009 | OH | 1195868 | 160071 | 0.13 | | 2003 | 2006 | RU | 1007248 | 126733 | 0.13 | | 1988 | 2009 | UA | 9593284 | 1197053 | 0.12 | | 2003 | 2006 | TZ | 136735 | 16496 | 0.12 | | 1988 | 2001 | TW | 2656286 | 280283 | 0.11 | | 1988 | 2009 | CO | 6023831 | 673354 | 0.11 | | 1988 | 2009 | AA | 10568437 | 1183786 | 0.11 | | 1988 | 2009 | US | 10276862 | 990995 | 0.10 | | 2007 | 2009 | 9E | 577244 | 59440 | 0.10 | | 1988 | 2009 | DL | 11866515 | 1156048 | 0.10 | | 2003 | 2009 | OO | 2654259 | 257069 | 0.10 | | 1988 | 2009 | NW | 7601727 | 725460 | 0.10 | | 1988 | 2009 | AS | 1506003 | 146920 | 0.10 | | 1988 | 1991 | PA | 203401 | 19263 | 0.09 | | 1988 | 2009 | WN | 12722174 | 1107840 | 0.09 | | 1988 | 2005 | HP | 2607603 | 235675 | 0.09 | | 2005 | 2009 | F9 | 307569 | 28679 | 0.09 | +------------+------------+---------+----------+-----------------+------+ Returned 24 row(s) in 7.18s

Conclusion and charts

To summarize, I’ve created the following charts:

Simple query:

As we can see, InfiniDB looks pretty good here. It also uses the MySQL protocol, so existing applications that use MySQL will work without any additional “connectors”.

One note regarding my query example: the “complex” query is designed in a way that makes it hard to use any particular set of indexes; it has to scan more than 70% of the table to generate the result set. That is why it is so slow in MySQL compared to columnar databases. Another “issue” is that the table is very wide and most of the columns are declared as varchar (the table is not normalized), which makes it large in MySQL. All this makes it ideal for columnar storage and compression. Other cases may not show such a huge difference.

So far I have been testing with a small data set (60G); I plan to run a big-data benchmark next.

The post Using InfiniDB MySQL server with Hadoop cluster for data analytics appeared first on MySQL Performance Blog.
