ScaleArc: Real-world application testing with WordPress (benchmark test)

Latest MySQL Performance Blog posts - April 29, 2014 - 6:00am

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by me and ScaleArc co-founder and chief architect, Uday Sawant.

The goal of this benchmark was to identify ScaleArc’s overhead using a real-world application – the world’s most popular (according to Wikipedia) content management system and blog engine: WordPress.

The tests also sought to identify the benefit of caching for this type of workload. The caching parameters used here reflect more real-life circumstances than those we applied in the sysbench performance tests – the goal here was not just to saturate the cache. For this reason, we created an artificial WordPress blog with generated data.

The size of the database was roughly 4GB. For this particular test, we saw that using ScaleArc introduces very little overhead, while caching increased throughput 3.5 times at peak capacity. Response times for queries with a cache hit decreased substantially. For example, a 5-second main page load dropped to less than 1 second when we had cache hits on certain queries. It’s hard to talk about response time in general here, because the different WordPress requests have different computational costs and therefore different response times.

Test description

The pre-generated test database contained the following:

  • 100 users
  • 25 categories
  • 100,000 posts (stories)
  • 300,000 comments (3 per post)

One iteration of the load contained the following:

  • Homepage retrieval
  • 10 story (post) page retrievals
  • 3 category page retrievals
  • Log in as a random user
  • That random user posts a new story and comments on an existing post

We think this usage pattern is close to reality – most people just visit blogs, but some write posts and comments. For the test, we used WordPress version 3.8.1. We wrote a simple shell script that performs these iterations using multiple processes; a minimal sketch of such a driver is shown below. Some of this testing pattern, however, is not realistic: in real life some posts attract many more comments than others, and some get none at all. Choosing a random post to comment on gives us a uniform comment distribution instead. This test doesn’t take that nuance into account, but it doesn’t change the big picture.
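A minimal sketch of such a driver script is below. Everything in it is illustrative – the blog URL, user names, passwords and the login/comment endpoints are assumptions, and the script we actually used was more elaborate – but it shows the shape of one iteration and how the parallel workers are launched:

#!/bin/bash
# Hypothetical sketch of the load driver; BLOG_URL, user names and passwords are assumptions.
BLOG_URL="http://wptest.local"
THREADS=${1:-4}       # number of parallel worker processes
DURATION=${2:-3600}   # run time in seconds; one hour per test in our runs

iteration() {
  local jar; jar=$(mktemp)
  curl -s -o /dev/null "$BLOG_URL/"                                  # homepage retrieval
  for i in $(seq 1 10); do                                           # 10 story (post) pages
    curl -s -o /dev/null "$BLOG_URL/?p=$(shuf -i 1-100000 -n 1)"
  done
  for i in $(seq 1 3); do                                            # 3 category pages
    curl -s -o /dev/null "$BLOG_URL/?cat=$(shuf -i 1-25 -n 1)"
  done
  local user="user$(shuf -i 1-100 -n 1)"                             # log in as a random user
  curl -s -c "$jar" -o /dev/null \
       -d "log=$user&pwd=password" "$BLOG_URL/wp-login.php"
  curl -s -b "$jar" -o /dev/null \
       -d "comment=test&comment_post_ID=$(shuf -i 1-100000 -n 1)" \
       "$BLOG_URL/wp-comments-post.php"                              # comment on a random post
  # posting a new story needs an authenticated wp-admin request and is omitted in this sketch
  rm -f "$jar"
}

worker() {
  local end=$(( $(date +%s) + DURATION )) n=0
  while [ "$(date +%s)" -lt "$end" ]; do iteration; n=$((n+1)); done
  echo "Iterations: $n"
}

for t in $(seq 1 "$THREADS"); do worker & done
wait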

We measured 3 scenarios:

  • Direct connection to the database (direct_wp).
  • Connection through ScaleArc without caching.
  • Connection through ScaleArc with caching enabled.

When caching is enabled, queries belonging to comments were cached for 5 minutes, queries belonging to the home page were cached for 15 minutes, and queries belonging to stories (posts) were cached for 30 minutes.

We varied the number of parallel iterations. Each test ran for an hour.

Results for direct database connection

Threads: 1, Iterations: 180, Time[sec]: 3605
Threads: 2, Iterations: 356, Time[sec]: 3616
Threads: 4, Iterations: 780, Time[sec]: 3618
Threads: 8, Iterations: 1408, Time[sec]: 3614
Threads: 16, Iterations: 2144, Time[sec]: 3619
Threads: 32, Iterations: 2432, Time[sec]: 3646
Threads: 64, Iterations: 2368, Time[sec]: 3635
Threads: 128, Iterations: 2432, Time[sec]: 3722

The result above is the summary output of the script we used. The data shows we reach peak capacity at 32 concurrent threads.

Results for connecting through ScaleArc

Threads: 1, Iterations: 171, Time[sec]: 3604
Threads: 2, Iterations: 342, Time[sec]: 3606
Threads: 4, Iterations: 740, Time[sec]: 3619
Threads: 8, Iterations: 1304, Time[sec]: 3609
Threads: 16, Iterations: 2048, Time[sec]: 3625
Threads: 32, Iterations: 2336, Time[sec]: 3638
Threads: 64, Iterations: 2304, Time[sec]: 3678
Threads: 128, Iterations: 2304, Time[sec]: 3675

The results are almost identical. Because a typical query in this example is quite expensive, the overhead of ScaleArc here is barely measurable.

Results for connecting through ScaleArc with caching enabled

Threads: 1, Iterations: 437, Time[sec]: 3601
Threads: 2, Iterations: 886, Time[sec]: 3604
Threads: 4, Iterations: 1788, Time[sec]: 3605
Threads: 8, Iterations: 3336, Time[sec]: 3600
Threads: 16, Iterations: 6880, Time[sec]: 3606
Threads: 32, Iterations: 8832, Time[sec]: 3600
Threads: 64, Iterations: 9024, Time[sec]: 3614
Threads: 128, Iterations: 8576, Time[sec]: 3630

Caching improved response time even for a single thread. At 32 threads, we see more than 3.5x improvement in throughput. Caching is a great help here for the same reason the overhead is barely measurable: the queries are more expensive in general, so more resources are spared when they are not run.

Throughput

From the web server’s access log, we created a per-second throughput graph (requests per second). Please note that the variance is relatively high, because the requests are not identical – retrieving the main page is a different request with a different cost than retrieving a story page.

The red and blue dots are literally plotted on top of each other – the green bar is always on top of them. The green ones have greater variance because, even though we had caching enabled during the test, we used more realistic TTLs, so cached items did actually expire during the test. When cache entries had expired, requests took longer and throughput was lower; when the cache was populated, requests completed faster and throughput was higher.

CPU utilization

CPU utilization characteristics are pretty much the same on the left and right sides (direct connection on the left and ScaleArc without caching on the right). In the middle, we can see that the web server’s CPU gets completely utilized sooner with caching. Because data comes faster from the cache, it serves more requests, which costs more resources computationally. On the other hand, the database server’s CPU utilization is significantly lower when caching is used. The bar is on the top on the left and right sides – in the middle, we have bars both at the top and at the bottom. The test is utilizing the database server’s CPU completely only when we hit cache misses.

Because ScaleArc serves the cache hits, and these requests are not hitting the database, the database is not used at all when requests are served from the cache. In the case of tests with caching on, the bottleneck became the web server, which is a component that is a lot easier to scale than the database.

There are two more key points to take away here. First, regardless of whether caching is turned on or off, this workload is not too much for ScaleArc. Second, the client we ran the measurement scripts on was not the bottleneck.

Conclusion

The goal of these benchmarks was to show that ScaleArc has very little overhead and that caching can be beneficial for a real-world application with a “read mostly” workload and relatively expensive reads (expensive meaning that the network round trip is not a significant contributor to the read’s response time). A blog is exactly that type of application – typically, more people are visiting than commenting. The test showed that ScaleArc supports this scenario well, delivering 3.5x throughput at peak capacity. It’s worth mentioning that if this system needs to be scaled, more web servers could be added, as well as more read slaves. Those read slaves can take up read queries either through a WordPress plugin which allows this, or through ScaleArc’s read-write splitting facility (it treats autocommit selects as reads); in the latter case, the caching benefit is present for the slaves as well.

The post ScaleArc: Real-world application testing with WordPress (benchmark test) appeared first on MySQL Performance Blog.

OOM relation to vm.swappiness=0 in new kernel

Latest MySQL Performance Blog posts - April 28, 2014 - 7:52am

I have recently been involved in diagnosing the reasons behind OOM invocation that would kill the MySQL server process. Of course these servers were primarily running MySQL. As such the MySQL server process was the one with the largest amount of memory allocated.

But the strange thing was that in all the cases, there was no swapping activity seen and there were enough pages in the page cache. Ironically, all of these servers were CentOS 6.4 running kernel version 2.6.32-358. Another commonality was the fact that vm.swappiness was set to 0. This is pretty much standard practice, one that is applied on nearly every server that runs MySQL.

Looking into this further I realized that there was a change introduced in kernel 3.5-rc1 that altered the swapping behavior when “vm.swappiness=0”.

Below is the description of the commit that changed “vm.swappiness=0” behavior, together with the diff:

$ git show fe35004fbf9eaf67482b074a2e032abb9c89b1dd
commit fe35004fbf9eaf67482b074a2e032abb9c89b1dd
Author: Satoru Moriya <satoru.moriya@hds.com>
Date:   Tue May 29 15:06:47 2012 -0700

    mm: avoid swapping out with swappiness==0

    Sometimes we'd like to avoid swapping out anonymous memory.  In
    particular, avoid swapping out pages of important process or process
    groups while there is a reasonable amount of pagecache on RAM so that we
    can satisfy our customers' requirements.

    OTOH, we can control how aggressive the kernel will swap memory pages with
    /proc/sys/vm/swappiness for global and
    /sys/fs/cgroup/memory/memory.swappiness for each memcg.

    But with current reclaim implementation, the kernel may swap out even if
    we set swappiness=0 and there is pagecache in RAM.

    This patch changes the behavior with swappiness==0.  If we set
    swappiness==0, the kernel does not swap out completely (for global reclaim
    until the amount of free pages and filebacked pages in a zone has been
    reduced to something very very small (nr_free + nr_filebacked < high
    watermark)).

    Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Reviewed-by: Rik van Riel <riel@redhat.com>
    Acked-by: Jerome Marchand <jmarchan@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 67a4fd4..ee97530 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1761,10 +1761,10 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * proportional to the fraction of recently scanned pages on
 	 * each list that were recently referenced and in active use.
 	 */
-	ap = (anon_prio + 1) * (reclaim_stat->recent_scanned[0] + 1);
+	ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1);
 	ap /= reclaim_stat->recent_rotated[0] + 1;

-	fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
+	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
 	fp /= reclaim_stat->recent_rotated[1] + 1;
 	spin_unlock_irq(&mz->zone->lru_lock);

@@ -1777,7 +1777,7 @@ out:
 		unsigned long scan;

 		scan = zone_nr_lru_pages(mz, lru);
-		if (priority || noswap) {
+		if (priority || noswap || !vmscan_swappiness(mz, sc)) {
 			scan >>= priority;
 			if (!scan && force_scan)
 				scan = SWAP_CLUSTER_MAX;

This change was merged into the RHEL kernel 2.6.32-303:

* Mon Aug 27 2012 Jarod Wilson <jarod@redhat.com> [2.6.32-303.el6]
...
- [mm] avoid swapping out with swappiness==0 (Satoru Moriya) [787885]

This obviously changed the way we think about “vm.swappiness=0”. Previously, setting it to 0 was thought to reduce the tendency to swap userland processes, but not to disable swapping completely. As such, we expected to see a little swapping instead of an OOM.

This applies to all RHEL/CentOS kernels from 2.6.32-303 onward and to other distributions that provide newer kernels, such as Debian and Ubuntu – or any other distribution where this change has been backported, as it was in RHEL.

Let me share with you the memory-zone-related statistics that were logged to the system log from one of the OOM events.

Mar 11 11:01:45 db01 kernel: Node 0 DMA free:15680kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15284kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Mar 11 11:01:45 db01 kernel: Node 0 DMA32 free:45448kB min:25140kB low:31424kB high:37708kB active_anon:1741812kB inactive_anon:520348kB active_file:4792kB inactive_file:462576kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3072160kB mlocked:0kB dirty:386328kB writeback:76268kB mapped:936kB shmem:0kB slab_reclaimable:20420kB slab_unreclaimable:6964kB kernel_stack:0kB pagetables:572kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:142592 all_unreclaimable? no
Mar 11 11:01:45 db01 kernel: Node 0 Normal free:42436kB min:42316kB low:52892kB high:63472kB active_anon:3041852kB inactive_anon:643624kB active_file:340156kB inactive_file:1003512kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:5171200kB mlocked:0kB dirty:979444kB writeback:22040kB mapped:15616kB shmem:180kB slab_reclaimable:41052kB slab_unreclaimable:35996kB kernel_stack:2720kB pagetables:19912kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:31552 all_unreclaimable? no

As can be seen, the amount of free memory and the amount of memory in the page cache were greater than the high watermark, which prevented any swapping activity. Yet unnecessary memory pressure caused OOM to be invoked, which killed the MySQL server process.
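If you want to check how close each zone is to its high watermark on a live system, the same counters shown in the OOM report are exposed in /proc/zoneinfo; a quick sketch (note that the values there are in pages, not kB):

awk '/^Node/                      { zone = $0 }
     $1=="pages" && $2=="free"    { free = $3 }
     $1=="high"                   { print zone " -> free: " free " pages, high watermark: " $2 " pages" }' /proc/zoneinfo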

MySQL getting OOM’ed is bad for many reasons and can have an undesirable impact, such as the loss of uncommitted transactions, or of transactions not yet flushed to the log when innodb_flush_log_at_trx_commit=0, or a much heavier impact from cold caches upon restart.

I prefer the old behavior of vm.swappiness and as such I now set it to a value of “1”. Setting vm.swappiness=0 means that you will now have to be much more accurate in how you configure the size of the various global and session buffers.
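For reference, changing it is a one-liner; a small sketch (the sysctl.conf entry makes the setting survive reboots):

# apply immediately on a running server
sudo sysctl -w vm.swappiness=1
# verify the current value
cat /proc/sys/vm/swappiness
# make it persistent across reboots
echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf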

The post OOM relation to vm.swappiness=0 in new kernel appeared first on MySQL Performance Blog.

Managing farms of MySQL servers with MySQL Fabric

Latest MySQL Performance Blog posts - April 25, 2014 - 12:00am

While built-in replication has been a major cause for MySQL’s wide adoption, official tools to help DBAs manage replication topologies have typically been missing from the picture. The community has produced many good products to fill in this gap, but recently, Oracle has been filling it too with the addition of MySQL Utilities to the mix.

One part of the Utilities that has been generating interest recently is MySQL Fabric, and we will be discussing this project in an upcoming series of blog posts.

According to the official documentation, MySQL Fabric is a system for managing farms of MySQL Servers. At the current stage, the system focuses on two areas of MySQL operations: High Availability and Sharding, relying on GTID based replication (available only on MySQL >= 5.6.5) for the former. Its development has been spearheaded by Mats Kindahl, senior principal software developer in MySQL at Oracle, who explains what it is in this post and again in this short video interview at Percona Live earlier this month.

We will start this series of posts by providing a broad overview of the project, and then we’ll dig deeper on each area on subsequent posts.

What it does

Currently, MySQL Fabric has two areas of server management in which it can help DBAs: High Availability and Sharding.

In terms of High Availability, MySQL Fabric will let you pool a group of MySQL servers and treat them as a single logical unit, with a Primary server that can take reads and writes, and Secondary servers that can take reads (and be used to scale those) as well as take over the Primary role in the event of a failure.

As we’ll see, MySQL Fabric relies on GTID-based replication to work. By default, the tool won’t do any automatic failover, but it can be configured to do that, and it does work. In future posts of this series, we’ll spend some time explaining how to set up an HA group of MySQL servers using MySQL Fabric, and then we’ll set out to try and break them in many ways; a rough sketch of the command line workflow follows below. Stay tuned if you’re interested in the results.
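As a preview of those posts, the basic HA workflow with the mysqlfabric command line tool looks roughly like the sketch below. The group name and host names are made up, and exact subcommand names and arguments may differ between Fabric releases, so treat this as an outline rather than a recipe:

# create an HA group and add three GTID-enabled MySQL 5.6 servers to it
mysqlfabric group create my_group
mysqlfabric group add my_group db1.example.com:3306
mysqlfabric group add my_group db2.example.com:3306
mysqlfabric group add my_group db3.example.com:3306
# pick a Primary and start monitoring; 'activate' turns on automatic failover
mysqlfabric group promote my_group
mysqlfabric group activate my_group
# check which server is Primary and which are Secondaries
mysqlfabric group lookup_servers my_group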

When it comes to Sharding, MySQL Fabric relies on special connectors to work. Without the connectors, you can still use the mysqlfabric command line tool to manage shards (and even migrate them from one server to another), but you will have to modify your application to include logic that decides which shard should be used for any given query. When using the connectors, the MySQL connection will actually be established with MySQL Fabric (through an XML-RPC service we’ll talk about later). The connectors cache the needed information (this is a simplification; we’ll go into more detail in the next post) so that the client can make the right routing decision when reading or writing data to a sharded table.

We feel that the HA features are more mature right now, while the Sharding ones look promising. Also, MySQL Fabric is extensible, and we think this is one area where it can grow a lot (for example, by using Shard Query with MySQL Fabric to support parallel distributed queries).

General Concepts

In order to understand MySQL Fabric we first need to present some terminology used by the project. We’ll start by listing basic definitions, and then we’ll go into more details when needed.

  • Group. A collection of mysqld servers.
  • Global group. Special groups that store updates that must be propagated to all shards.
  • Node. A running instance of MySQL Fabric.
  • Shard. A horizontal partition of data in a table.
  • Primary. A group member that has been designated master.
  • Secondary. A group member that is read only.

Group

A server here is actually a mysqld instance, though ideally, all instances of a group should be on different servers. However, while testing, you can create multiple instances on the same host since you don’t really need HA.

A given server can only be part of a single group. This may seem confusing at first, but when you realize MySQL Fabric relies on replication (using GTID) for most of its work, it becomes clearer. A given MySQL server can only have one master and therefore it makes no sense for it to belong to multiple groups.

Groups have identifiers, which are just symbolic names that need to comply with some basic rules.

Global group

Global groups are special groups involved in Sharding. They are used to propagate changes that are global to all shards (like schema changes, though not only that).

Node

Note that a node is *not* a MySQL server that’s part of a group. It is a Python program that, among other things, provides the XML-RPC server used by the special connectors and by the ‘mysqlfabric’ command line client. A node will, however, need a mysqld instance. This instance is called the backend store and will be used by MySQL Fabric to save all the information it needs to manage servers.
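A rough sketch of bringing a node up, assuming the backend store connection details are already in the fabric.cfg configuration file (again, exact commands may differ between releases):

# create the backing store schema inside the mysqld instance configured in fabric.cfg
mysqlfabric manage setup
# start the MySQL Fabric node (the XML-RPC server the connectors and the CLI talk to)
mysqlfabric manage start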

Shard

We said a shard is a horizontal partition of the data in a table, but this partitioning happens at the MySQL Fabric level. MySQL is completely unaware of it, and at the mysqld instance level a shard is just a table. We’ll talk more about this soon, as it has some consequences.

Primary

The primary server is the only writable server in a group. This applies to HA, not to sharding, though you could define a group (and therefore a Primary) per shard and thus use MySQL Fabric both for sharding and to provide HA for each shard.

Secondary

A Secondary server is a member of a group that is available to replace a Primary server on failover and that is read only. It can also be used to scale out reads.

Highlights 

As with anything, MySQL Fabric has its good and bad points. Fortunately, the bulk of the bad points we identified are due to the project still being early in its lifecycle. Considering the latest release is an RC, we’re sure those will go away in the future.

On the good side:

  • It is developed by Oracle.

We think this is important, because MySQL did not have a standard tool to manage farms of servers until now. People will still be able to use tools from other providers (or cook their own), but we think it’s good to have a standard offering that’s part of the MySQL packages.

  • It is extensible.

MySQL Fabric feels more like a framework than a closed product. In fact, even some official presentations introduce it as a framework. As a framework, it is implemented in Python, a widely available and friendly interpreted language. We believe this means MySQL Fabric should be adaptable to specific needs with little hassle.

  • It is focused on MySQL versions 5.6.10 and newer.

By not worrying about backwards compatibility, the implementation can be simpler. Take Secondary server promotion after the Primary goes down as an example: GTID makes this much simpler.

On the bad side:

  • It is a bit rough around the edges.

This is expected for a new product, and to be honest, most problems we faced turned out to be documentation issues rather than actual problems with the software. The docs are still a bit green, but source code comments are good and plentiful, so if you want to get your hands dirty and really understand how it works, that is the path we suggest.

  • Some things require help from MySQL to be properly implemented, and MySQL does not provide it (yet?).

An example: it is currently impossible to make sure that writes can never go to the wrong shard. As we said earlier, at the individual mysqld server, a shard is just a table, period. Ideally, some future MySQL version should extend the @read_only variable functionality and allow you to selectively mark parts of the data as read only. If we could mark a specific table as read only, or as updatable only when a given condition is met (i.e. WHERE id between <lower bound> and <upper bound>), it would increase the safety of sharding a lot.

  • It is focused on MySQL versions 5.6.10 and newer.

Yes, we said that’s good, but the obvious downside is you can’t consider using MySQL Fabric if you’re on an earlier MySQL version, and a lot of people still are.

  • In HA setups, MySQL Fabric itself can become a single point of failure.

MySQL Fabric (the XML-RPC ‘Node’) and its data store (the mysqld instance that stores MySQL Fabric’s data) are a single point of failure that needs to be addressed. In practical terms, the impact of MySQL Fabric going down will vary with your use case. If you’re only using the mysqlfabric utility to manage servers, nothing will happen as long as all servers in a Group continue to work. If, however, you’re using one of the special connectors to access the Group, then your application will be down. This is resolvable, and we will discuss some approaches in the HA posts, but we think the best solution going forward is for MySQL Fabric to address this in the future by letting you set up multiple Fabric Nodes and have them monitor each other and promote a new active one if needed.

What comes next

Over the next few days, we will blog more about this, describing how to set up HA and Sharded clusters using MySQL Fabric, walking you through the process, and also trying to see how and when it can fail. We had a lot of fun doing this and we hope you have a good time reading it and experimenting yourself too!

The post Managing farms of MySQL servers with MySQL Fabric appeared first on MySQL Performance Blog.

Encrypted and incremental MySQL backups with Percona XtraBackup

Latest MySQL Performance Blog posts - April 24, 2014 - 12:00am

We’ve recently received a number of questions on how to implement incremental MySQL backups alongside encryption with Percona XtraBackup. Some users initially thought it was not possible, because with the default --encrypt options in XtraBackup all files will be encrypted, but alas, that is not the case. This is where the --extra-lsn-dir option becomes useful, because it allows you to save the LSN (Log Sequence Number) information to another directory and exclude it from encryption, letting you reuse the same information needed by incremental backups. Enough talk, let me show you.

Because you would usually want to script your backup and restore procedure, I’ll use variables here as well to make things more familiar. First, I create 3 folders where my backups will be stored: ‘full’ for full backups, ‘incr’ for incremental backups, and ‘lsns’ to store an extra copy of my xtrabackup_checkpoints file via --extra-lsn-dir.

mkdir -p /ssd/msb/msb_5_5_360/bkp/full
mkdir -p /ssd/msb/msb_5_5_360/bkp/incr
mkdir -p /ssd/msb/msb_5_5_360/bkp/lsns

Second, to have better control of where my backups go, I prefer assigning timestamped folders instead, using the --no-timestamp option to innobackupex.

CURDATE=$(date +%Y-%m-%d_%H_%M_%S)

Then manually create the specific directory where the backup’s xtrabackup_checkpoints file would be saved:

mkdir -p /ssd/msb/msb_5_5_360/bkp/lsns/$CURDATE

Of course, I need an encryption key for my encrypted backups. In this case, taking the example from the manual, I used openssl to generate a random key. You can use your own key string as long as its size conforms to the size required by the --encrypt algorithm you chose.

echo -n $( openssl enc -aes-256-cbc -pass pass:Password -P -md sha1 \
  | grep iv | cut -d'=' -f2 ) > /ssd/msb/msb_5_5_360/bkp/backups.key

Next, I would run my full backup:

innobackupex --defaults-file=/ssd/msb/msb_5_5_360/my.sandbox.cnf \
  --extra-lsndir=/ssd/msb/msb_5_5_360/bkp/lsns/$CURDATE \
  --encrypt=AES256 --encrypt-key-file=/ssd/msb/msb_5_5_360/bkp/backups.key \
  --no-timestamp /ssd/msb/msb_5_5_360/bkp/full/$CURDATE

The output says my full backup is saved to:

innobackupex: Backup created in directory '/ssd/msb/msb_5_5_360/bkp/full/2014-04-23_01_20_46'
140423 01:20:55 innobackupex: Connection to database server closed
140423 01:20:55 innobackupex: completed OK!

Now here’s the trick: because the full backup is encrypted, we will use the xtrabackup_checkpoints file separately saved by xtrabackup to the --extra-lsn-dir path we specified above to get the LSN and use that for our next incremental backup.

LAST_LSN=$( cat /ssd/msb/msb_5_5_360/bkp/lsns/$CURDATE/xtrabackup_checkpoints \
  | grep to_lsn | cut -d'=' -f2 )
CURDATE=$(date +%Y-%m-%d_%H_%M_%S)
mkdir /ssd/msb/msb_5_5_360/bkp/lsns/$CURDATE

Above, we get the LSN value and assign it to a variable. Similarly, we created a new CURDATE string for our incremental backup to use and created a new directory for the xtrabackup_checkpoints file. If you plan to create another incremental backup based off of what we are about to take now, you will use this next xtrabackup_checkpoints file to get LAST_LSN.

With the upcoming Percona XtraBackup 2.2.1, you will no longer need --extra-lsn-dir, nor will you need to parse the xtrabackup_checkpoints file for this purpose. A new feature will allow the user to save backup metadata to an InnoDB table.

So, now that we have our $LAST_LSN value, we execute our incremental backup with the command:

innobackupex --defaults-file=/ssd/msb/msb_5_5_360/my.sandbox.cnf \
  --extra-lsndir=/ssd/msb/msb_5_5_360/bkp/lsns/$CURDATE \
  --encrypt=AES256 --encrypt-key-file=/ssd/msb/msb_5_5_360/bkp/backups.key \
  --no-timestamp --incremental --incremental-lsn $LAST_LSN \
  /ssd/msb/msb_5_5_360/bkp/incr/$CURDATE

Again, based on the output, my backup was created at:

innobackupex: Backup created in directory '/ssd/msb/msb_5_5_360/bkp/incr/2014-04-23_01_21_00'
140423 01:21:47 innobackupex: Connection to database server closed
140423 01:21:47 innobackupex: completed OK!

Now we have a full backup and an incremental backup. Of course, to make sure our backups are usable, we’d like to validate them. To do that, our first step is to decrypt both the full and the incremental backups. innobackupex has another handy option for that, --decrypt, and you can even use --parallel to make it faster.

innobackupex --decrypt=AES256 \
  --encrypt-key-file=/ssd/msb/msb_5_5_360/bkp/backups.key \
  /ssd/msb/msb_5_5_360/bkp/full/2014-04-23_01_20_46
innobackupex --decrypt=AES256 \
  --encrypt-key-file=/ssd/msb/msb_5_5_360/bkp/backups.key \
  /ssd/msb/msb_5_5_360/bkp/incr/2014-04-23_01_21_00

Once the backups are decrypted, we can go through the usual process of preparing full and incremental backups as described in the manual.

innobackupex --defaults-file=/ssd/msb/msb_5_5_360/my.sandbox.cnf \
  --apply-log --redo-only /ssd/msb/msb_5_5_360/bkp/full/2014-04-23_01_20_46
innobackupex --defaults-file=/ssd/msb/msb_5_5_360/my.sandbox.cnf \
  --apply-log --redo-only /ssd/msb/msb_5_5_360/bkp/full/2014-04-23_01_20_46 \
  --incremental-dir=/ssd/msb/msb_5_5_360/bkp/incr/2014-04-23_01_21_00
innobackupex --defaults-file=/ssd/msb/msb_5_5_360/my.sandbox.cnf \
  --apply-log /ssd/msb/msb_5_5_360/bkp/full/2014-04-23_01_20_46
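After the prepare steps above, restoring is the standard innobackupex routine; a minimal sketch is below, assuming the MySQL Sandbox layout used throughout this post (stop the instance and empty the datadir first, and adjust paths and ownership to your environment):

# stop the instance and move the old datadir out of the way (paths are examples)
/ssd/msb/msb_5_5_360/stop
mv /ssd/msb/msb_5_5_360/data /ssd/msb/msb_5_5_360/data.old
mkdir /ssd/msb/msb_5_5_360/data
# copy the prepared full backup back into the datadir defined in the defaults file
innobackupex --defaults-file=/ssd/msb/msb_5_5_360/my.sandbox.cnf \
  --copy-back /ssd/msb/msb_5_5_360/bkp/full/2014-04-23_01_20_46
# fix ownership if mysqld runs as the mysql user, then start the instance again
chown -R mysql:mysql /ssd/msb/msb_5_5_360/data
/ssd/msb/msb_5_5_360/start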

The post Encrypted and incremental MySQL backups with Percona XtraBackup appeared first on MySQL Performance Blog.

Percona Live 2014 behind; MySQL ahead

Latest MySQL Performance Blog posts - April 23, 2014 - 6:00am

I started using MySQL 11 years ago.  That’s not too long compared to other people in the industry, but nonetheless here’s my perspective on the state of the MySQL industry after attending Percona Live MySQL Conference & Expo 2014.

In short, the attitude around MySQL has changed from “Does it work?” to “Is it fast and reliable?” to “How do we manage it?” To further generalize, these periods correspond roughly to the original MySQL AB team, then to Percona and Oracle, and the last period is the current one, in which key players like WebScaleSQL are emerging.

Does it work?

Peter Zaitsev said in one of his keynote talks that MySQL used to be considered a toy.  Today that assessment is wrong, or at least very difficult to defend.  The proof is that nearly all of the largest and most successful web companies today use it (Facebook, Twitter, Google, Tumblr, Box, etc.), and myriad other web and traditional companies use it too.  MySQL works, but it’s not a panacea, and successful companies realize this.  I’ll talk more about this later.

Is it fast and reliable?

There have been rough spots, like MySQL 5.0 and the MySQL-Sun-Oracle transition, but MySQL is past these.  The history is, of course, more nuanced, but generally speaking those rough spots gave rise to Percona.  Fast and reliable has been at the heart of Percona Server even before it was Percona Server (i.e. when it was still a collection of patches).  Other projects and companies were created during this time, but in my biased opinion Percona held the fort.  When MySQL became an Oracle product, the collective MySQL conscience waited to see if they would kill or revive it.  They have revived it.  MySQL 5.6 is great, and 5.7 is looking good so far too.  The work Percona did and still does, combined with Oracle’s work, has made MySQL a product you can bet the business on.  In other words: MySQL won’t fail you.  Moreover, the work at companies like Fusion-io proves that the state of the art in performance is alive and well, as highlighted by Nisha Talagala’s excellent keynote “The Evolution of MySQL in the All-Flash Datacenter.”

How do we manage it?

MySQL has become business.  Let me put it another way that’s difficult to say because I consider myself a hacker, but I think it’s true nonetheless (and I’ve heard others say it too): MySQL isn’t cool any more.  “Cool” is, in the context of technology, a weird euphemism for “new and unstable but useful and promising”.  MySQL was all these in past years, but now it’s mature, proven to be stable and useful, and it has delivered on the promise of being a free, open-source RDBMS that’s stable and useful.  As a business product, the concern is manageability: deploying, scaling, monitoring, maintaining, etc.  These are not new concerns; the difference today is focus: in the past these mattered less because we still had core usability and performance issues, but today MySQL usability and performance are solved problems.  Mark Callaghan’s PLMCE 2012 keynote was aptly titled: “What Comes Next?”  In 2012 he saw that MySQL at core was stable, so he turned his attention to things around it which can be pain points, like migrating shards and row compression.  In other words, his message was not “here’s what we still need to fix in MySQL”, it was “here’s what we need to manage MySQL sanely.”  He reiterated this message in a recent blog post, “Modern databases”:

“We have much needed work on write-optimized database algorithms – Tokutek, LevelDB, RocksDB, HBase, Cassandra. We also get reports of amazing performance. I think there is too much focus on peak performance and not enough on predictable performance and manageability.”

In my humble opinion, this is the current state of the MySQL industry: learning, developing, and establishing how to manage MySQL.  Although the new WebScaleSQL collaboration is focused prima facie on performance at scale, as Mark said in his blog post, “Predictable performance is part of manageability.”  There are many other companies and projects for managing various aspects of MySQL, like ClusterControl for MySQL Galera by Severalnines and Propagator by Outbrain (both were at PLMCE this year).

Earlier I said “MySQL works, but it’s not a panacea and successful companies realize this.”  Successful companies like Dyn (who presented this year) use MySQL and other technologies.  It’s important to realize that MySQL is one part of a business.  The other parts are Hadoop, Redis, memcached, etc.  OpenStack and other cloud platforms are increasingly mentioned, too.  Therefore, managing MySQL is only half the story.  The other half is understanding MySQL’s place in and interaction with other business technologies.

In summary, for me Percona Live MySQL Conference & Expo 2014 highlighted how MySQL has become one castle in the kingdom whereas 10 years ago it was an outpost on the frontier.  People no longer ask “is MySQL fast and reliable?” Instead they ask, “how can we manage 100 MySQL instances and a handful of other technologies with 2 full-time DBAs?”  The MySQL industry will continue to add features and improve performance, but we have shifted from doing that in the service of making a stable product to making a manageable product.

The post Percona Live 2014 behind; MySQL ahead appeared first on MySQL Performance Blog.

Percona University @Montevideo, FISL & São Paulo MySQL Meetup

Latest MySQL Performance Blog posts - April 21, 2014 - 2:20pm

Peter Zaitsev at last year’s Percona University event in Montevideo

Later this week I’m excited to depart on a trip to South America. First I will stop in Montevideo, Uruguay, to meet with Percona’s team out there as well as to participate in our next Percona University event on April 29.

For those who do not know, Percona University events are free to attend and packed with technical presentations about MySQL and surrounding technologies, delivered by members of the Percona team and community speakers. Even though attendance is free, because space is limited you will need to register now to ensure that we have space for everyone.

Next stop will be Brazil, where I’ll be speaking at the local São Paulo MySQL Meetup group on May 6 and then on to Porto Alegre where I’ll speak at FISL (the International Free Software Forum) (May 9 14:00) about the “Practical Optimization of MySQL.”

If you’re also attending FISL and would like to connect at the show drop me a note at ceo AT percona.com.

The post Percona University @Montevideo, FISL & São Paulo MySQL Meetup appeared first on MySQL Performance Blog.

Using Apache Hadoop and Impala together with MySQL for data analysis

Latest MySQL Performance Blog posts - April 21, 2014 - 6:43am

Apache Hadoop is commonly used for data analysis. It is fast for data loads and scalable. In a previous post I showed how to integrate MySQL with Hadoop. In this post I will show how to export a table from MySQL to Hadoop, load the data into Cloudera Impala (columnar format) and run reporting on top of that. For the examples below I will use the “ontime flight performance” data from my previous post (Increasing MySQL performance with parallel query execution). I’ve used Cloudera Manager v.4 to install Apache Hadoop and Impala. For this test I’ve intentionally used old hardware (servers from 2006) to show that Hadoop can utilize old hardware and still scale. The test cluster consists of 6 datanodes. Below are the specs:

Purpose: Namenode, Hive metastore, etc. + Datanodes
Server specs: 2x PowerEdge 2950, 2x L5335 CPU @ 2.00GHz, 8 cores, 16G RAM, RAID 10 with 8 SAS drives

Purpose: Datanodes only
Server specs: 4x PowerEdge SC1425, 2x Xeon CPU @ 3.00GHz, 2 cores, 8G RAM, single 4TB drive

As you can see, those are pretty old servers; the only thing I’ve changed is adding a 4TB drive to be able to store more data. Hadoop provides redundancy at the server level (it writes 3 copies of the same block across the datanodes), so we do not need RAID on the datanodes (we do still need redundancy for the namenode, though).

Data export

There are a couple of ways to export data from MySQL to Hadoop. For the purpose of this test I have simply exported the ontime table into a text file with:

select * into outfile '/tmp/ontime.psv'  FIELDS TERMINATED BY ',' from ontime;

(you can use “|” or any other symbol as a delimiter) Alternatively, you can download data directly from www.transtats.bts.gov site using this simple script:

for y in {1988..2013}
do
  for i in {1..12}
  do
    u="http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_${y}_${i}.zip"
    wget $u -o ontime.log
    unzip On_Time_On_Time_Performance_${y}_${i}.zip
  done
done

Load into Hadoop HDFS

The first thing we need to do is load the data into HDFS as a set of files. Hive or Impala will work with a directory into which you have imported your data, and will concatenate all files inside this directory. In our case, it is easy to simply copy all our files into a directory inside HDFS:

$ hdfs dfs -mkdir /data/ontime/
$ hdfs -v dfs -copyFromLocal On_Time_On_Time_Performance_*.csv /data/ontime/

 Create external table in Impala

Now, when we have all data files loaded we can create an external table:

CREATE EXTERNAL TABLE ontime_csv (
  YearD int,
  Quarter tinyint,
  MonthD tinyint,
  DayofMonth tinyint,
  DayOfWeek tinyint,
  FlightDate string,
  UniqueCarrier string,
  AirlineID int,
  Carrier string,
  TailNum string,
  FlightNum string,
  OriginAirportID int,
  OriginAirportSeqID int,
  OriginCityMarketID int,
  Origin string,
  OriginCityName string,
  OriginState string,
  OriginStateFips string,
  OriginStateName string,
  OriginWac int,
  DestAirportID int,
  DestAirportSeqID int,
  DestCityMarketID int,
  Dest string,
  ...
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/ontime';

Note the “EXTERNAL” keyword and the LOCATION clause (LOCATION points to a directory inside HDFS, not a file). Impala will only create the metadata (it will not modify or move the data). We can query this table right away; however, Impala will need to scan all files (a full scan) for queries.

Example:

[d30.local:21000] > select yeard, count(*) from ontime_csv group by yeard;
Query: select yeard, count(*) from ontime_csv group by yeard
+-------+----------+
| yeard | count(*) |
+-------+----------+
| 2010  | 6450117  |
| 2013  | 5349447  |
| 2009  | 6450285  |
| 2002  | 5271359  |
| 2004  | 7129270  |
| 1997  | 5411843  |
| 2012  | 6096762  |
| 2005  | 7140596  |
| 1999  | 5527884  |
| 2007  | 7455458  |
| 1994  | 5180048  |
| 2008  | 7009726  |
| 1988  | 5202096  |
| 2003  | 6488540  |
| 1996  | 5351983  |
| 1989  | 5041200  |
| 2011  | 6085281  |
| 1998  | 5384721  |
| 1991  | 5076925  |
| 2006  | 7141922  |
| 1993  | 5070501  |
| 2001  | 5967780  |
| 1995  | 5327435  |
| 1990  | 5270893  |
| 1992  | 5092157  |
| 2000  | 5683047  |
+-------+----------+
Returned 26 row(s) in 131.38s

(Note that “group by” will not sort the rows, unlike MySQL. To sort we will need to add “ORDER BY yeard”)
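If you prefer to script such checks, the sorted variant can, for example, be run non-interactively through impala-shell (the -i host below simply reuses the impalad node shown in the prompts above):

impala-shell -i d30.local:21000 \
  -q "select yeard, count(*) from ontime_csv group by yeard order by yeard;"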

Explain plan:

Query: explain select yeard, count(*) from ontime_csv group by yeard
+-----------------------------------------------------------+
| Explain String |
+-----------------------------------------------------------+
| PLAN FRAGMENT 0 |
|   PARTITION: UNPARTITIONED |
| |
|   4:EXCHANGE |
| |
| PLAN FRAGMENT 1 |
|   PARTITION: HASH_PARTITIONED: yeard |
| |
|   STREAM DATA SINK |
|     EXCHANGE ID: 4 |
|     UNPARTITIONED |
| |
|   3:AGGREGATE (merge finalize) |
|   |  output: SUM(COUNT(*)) |
|   |  group by: yeard |
|   | |
|   2:EXCHANGE |
| |
| PLAN FRAGMENT 2 |
|   PARTITION: RANDOM |
| |
|   STREAM DATA SINK |
|     EXCHANGE ID: 2 |
|     HASH_PARTITIONED: yeard |
| |
|   1:AGGREGATE |
|   |  output: COUNT(*) |
|   |  group by: yeard |
|   | |
|   0:SCAN HDFS |
|     table=ontime.ontime_csv #partitions=1/1 size=45.68GB |
+-----------------------------------------------------------+
Returned 31 row(s) in 0.13s

As we can see, it will scan 45GB of data.

Impala with columnar format and compression

The great benefit of Impala is that it supports columnar formats and compression. I’ve tried the new “parquet” format with the “snappy” compression codec. As our table is very wide (and de-normalized), the columnar format helps a lot. To take advantage of the “parquet” format we need to load data into it, which is easy to do when we already have a table inside Impala and files inside HDFS:

[d30.local:21000] > set PARQUET_COMPRESSION_CODEC=snappy;
[d30.local:21000] > create table ontime_parquet_snappy LIKE ontime_csv STORED AS PARQUET;
[d30.local:21000] > insert into ontime_parquet_snappy select * from ontime_csv;
Query: insert into ontime_parquet_snappy select * from ontime_csv
Inserted 152657276 rows in 729.76s

Then we can test our query against the new table:

Query: explain select yeard, count(*) from ontime_parquet_snappy group by yeard
+---------------------------------------------------------------------+
| Explain String |
+---------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
|   PARTITION: UNPARTITIONED |
| |
|   4:EXCHANGE |
| |
| PLAN FRAGMENT 1 |
|   PARTITION: HASH_PARTITIONED: yeard |
| |
|   STREAM DATA SINK |
|     EXCHANGE ID: 4 |
|     UNPARTITIONED |
| |
|   3:AGGREGATE (merge finalize) |
|   |  output: SUM(COUNT(*)) |
|   |  group by: yeard |
|   | |
|   2:EXCHANGE |
| |
| PLAN FRAGMENT 2 |
|   PARTITION: RANDOM |
| |
|   STREAM DATA SINK |
|     EXCHANGE ID: 2 |
|     HASH_PARTITIONED: yeard |
| |
|   1:AGGREGATE |
|   |  output: COUNT(*) |
|   |  group by: yeard |
|   | |
|   0:SCAN HDFS |
|     table=ontime.ontime_parquet_snappy #partitions=1/1 size=3.95GB |
+---------------------------------------------------------------------+
Returned 31 row(s) in 0.02s

As we can see, it will scan a much smaller amount of data: 3.95GB (with compression) compared to 45GB.

Results:

Query: select yeard, count(*) from ontime_parquet_snappy group by yeard
+-------+----------+
| yeard | count(*) |
+-------+----------+
| 2010  | 6450117  |
| 2013  | 5349447  |
| 2009  | 6450285  |
...
Returned 26 row(s) in 4.17s

And the response time is much better as well.

Impala complex query example

I’ve used the complex query from my previous post. I had to adapt it for use with Impala: it does not support the “sum(ArrDelayMinutes>30)” notation, but “sum(if(ArrDelayMinutes>30, 1, 0))” works fine.

select min(yeard), max(yeard), Carrier, count(*) as cnt,
  sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
  round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
FROM ontime_parquet_snappy
WHERE DayOfWeek not in (6,7)
  and OriginState not in ('AK', 'HI', 'PR', 'VI')
  and DestState not in ('AK', 'HI', 'PR', 'VI')
  and flightdate < '2010-01-01'
GROUP by carrier
HAVING cnt > 100000 and max(yeard) > 1990
ORDER by rate DESC
LIMIT 1000;

The query is intentionally designed so that it does not take advantage of indexes: most of the conditions will only filter out less than 30% of the data.

Impala results:

+------------+------------+---------+----------+-----------------+------+
| min(yeard) | max(yeard) | carrier | cnt      | flights_delayed | rate |
+------------+------------+---------+----------+-----------------+------+
| 2003       | 2009       | EV      | 1454777  | 237698          | 0.16 |
| 2003       | 2009       | FL      | 1082489  | 158748          | 0.15 |
| 2006       | 2009       | XE      | 1016010  | 152431          | 0.15 |
| 2003       | 2009       | B6      | 683874   | 103677          | 0.15 |
| 2006       | 2009       | YV      | 740608   | 110389          | 0.15 |
| 2003       | 2005       | DH      | 501056   | 69833           | 0.14 |
| 2001       | 2009       | MQ      | 3238137  | 448037          | 0.14 |
| 2004       | 2009       | OH      | 1195868  | 160071          | 0.13 |
| 2003       | 2006       | RU      | 1007248  | 126733          | 0.13 |
| 2003       | 2006       | TZ      | 136735   | 16496           | 0.12 |
| 1988       | 2009       | UA      | 9593284  | 1197053         | 0.12 |
| 1988       | 2009       | AA      | 10600509 | 1185343         | 0.11 |
| 1988       | 2001       | TW      | 2659963  | 280741          | 0.11 |
| 1988       | 2009       | CO      | 6029149  | 673863          | 0.11 |
| 2007       | 2009       | 9E      | 577244   | 59440           | 0.10 |
| 1988       | 2009       | US      | 10276941 | 991016          | 0.10 |
| 2003       | 2009       | OO      | 2654259  | 257069          | 0.10 |
| 1988       | 2009       | NW      | 7601727  | 725460          | 0.10 |
| 1988       | 2009       | DL      | 11869471 | 1156267         | 0.10 |
| 1988       | 2009       | AS      | 1506003  | 146920          | 0.10 |
| 1988       | 2005       | HP      | 2607603  | 235675          | 0.09 |
| 2005       | 2009       | F9      | 307569   | 28679           | 0.09 |
| 1988       | 1991       | PA      | 206841   | 19465           | 0.09 |
| 1988       | 2009       | WN      | 12722174 | 1107840         | 0.09 |
+------------+------------+---------+----------+-----------------+------+
Returned 24 row(s) in 15.28s

15.28 seconds is significantly faster than the original MySQL results (15 min 56.40 sec without parallel execution and 5 min 47 sec with parallel execution). However, this is not an “apples to apples” comparison:

  • MySQL will scan 45G of data and Impala with parquet will only scan 3.5G
  • MySQL will run on a single server, Hadoop + Impala will run in parallel on 6 servers.

Nevertheless, Hadoop + Impala shows impressive performance and the ability to scale out of the box, which can help a lot with large-data-volume analysis.

Conclusion

Hadoop + Impala will give us an easy way to analyze large datasets using SQL with the ability to scale even on the old hardware.

In my next posts I plan to explore:

As always, please share your thoughts in the comments.

The post Using Apache Hadoop and Impala together with MySQL for data analysis appeared first on MySQL Performance Blog.

Percona Software in Ubuntu 14.04 LTS (Trusty Tahr) release

Latest MySQL Performance Blog posts - April 18, 2014 - 6:22am

I’d like to congratulate Canonical on the new Ubuntu 14.04 LTS (Trusty Tahr) release. It really looks like a great release, and I say that having my own agenda: it looks even better because it comes with a full line of Percona software.
If you install Ubuntu 14.04 and run an aptitude search, you will find our packages available in the default repositories.
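You can check the available packages for yourself with a quick package search (illustrative commands; the exact list depends on which repositories you have enabled):

aptitude search percona
# or, using apt-cache, which is installed by default:
apt-cache search percona | sort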

Percona Toolkit and Percona XtraBackup are at their latest versions, but Percona Server and Percona XtraDB Cluster come as 5.5 versions, which is in line with the default MySQL version, which again is 5.5.

I expect this release will make it much easier for users to get familiar with our software, so you can go and try this today!

The post Percona Software in Ubuntu 14.04 LTS (Trusty Tahr) release appeared first on MySQL Performance Blog.

How to find bugs in MySQL

Latest MySQL Performance Blog posts - April 16, 2014 - 10:00pm

Finding bugs in MySQL is not only fun, it’s also something I have been doing the last four years of my life.

Whether you want to become the next Shane Bester (who is generally considered the most skilled MySQL bug hunter worldwide), or just want to prove you can outsmart some of the world’s best programmers, finding bugs in MySQL is a skill no longer reserved for top QA engineers armed with loads of scripts, expensive flash storage and top-range server hardware. Of course, for professionals that’s still the way to go, but now anyone with an average laptop and a standard HDD can have a lot of fun trying to find that elusive crash…

If you follow this post carefully, you may well be able to find a nice crashing bug (or two) running RQG (an excellent database QA tool). Linux would be the preferred testing OS, but if you are using Windows as your main OS, I would recommend getting VirtualBox and running a Linux guest in a suitably sized (i.e. large) VM. In terms of the acronym “RQG”, this stands for “Random Query Generator,” also named “randgen.”

If you’re not just after finding any bug out there (“bug hunting”), you can tune the RQG grammars (files that define what sort of SQL RQG executes) to more or less match your “issue area.” For example, if you are always running into a situation where the server crashes on a DELETE query (as seen at the end of the mysqld error log for example), you would want an SQL grammar that definitely has a variety of DELETE queries in it. These queries should be closely matched with the actual crashing query – crashes usually happen due to exactly the same, or similar statements with the same clauses, conditions etc.

But, taking a step back, to get started with RQG you can either use the setup_server.sh script in percona-qa (more on how to obtain percona-qa from Launchpad using bazaar below), or you can use yum to manually install a set of modules on your Linux machine:

$ sudo yum install kernel-devel wget patch make cmake automake autoconf libtool bzr gtest zlib-static \
gcc gcc-c++ ncurses-devel libaio libaio-devel bison valgrind perl-DBD-mysql cpan zlib-devel \
bzip2 valgrind-devel svn pam-devel openssl openssl-dev screen strace sysbench

(Note that I have included a few extra items needed for Percona Server & a few items that are handy like screen and sysbench for example. Note also that you can change yum to apt-get, though this may require a few package name changes – search the web for info if you want to use apt-get.)

After these modules are installed, you can:

1. Pull the tree from Launchpad:
$ bzr branch lp:randgen

2. Make sure to have the many (Perl modules etc.) dependencies installed. Follow:
https://github.com/RQG/RQG-Documentation/wiki/RandomQueryGeneratorQuickStart#wiki-Prerequisites

3. Do a quick test to see if RQG is working fine:
https://github.com/RQG/RQG-Documentation/wiki/RandomQueryGeneratorQuickStart#wiki-Running_your_first_test

If all that worked, then congratulations, you now have simple RQG runs working. Let’s now look into how RQG is structured.

The way you can think about RQG “execution” in a hierarchical tree format is this: combinations.pl starts runall.pl, which starts gentest.pl, which may start gendata.pl. You don’t have to use combinations.pl (or even runall.pl, as you learned by following the ‘running your first test’ example above; i.e. mysqld can be started manually and gentest.pl can then be used to test the already started server), but runall.pl and especially combinations.pl add more power to our testing, as we will see soon.

In terms of the various Perl scripts (.pl) listed in the tree, it is:

- combinations.pl generates the many different ways (read ‘trials’) in which RQG is started (using runall.pl) with various options to mysqld etc.
- runall.pl starts/stops mysqld, does various high-level checking etc. (in other words, ‘a basic RQG run’)
- gentest.pl is the executor component (in other words, ‘the actual basic RQG test’)
- gendata.pl sets up the data (tables + data)

If you know the performance testing tool SysBench, you may compare gentest.pl with an actual SysBench run and gendata.pl with a “sysbench prepare.”
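To make this hierarchy more concrete, a typical standalone runall.pl invocation looks roughly like the sketch below. The basedir path is an example only, the grammar and gendata files come from the randgen tree, and option names follow the RQG documentation, so check runall.pl --help in your checkout before relying on them:

perl ./runall.pl \
  --basedir=/path/to/your/percona-server-or-mysql-basedir \
  --grammar=conf/examples/example.yy \
  --gendata=conf/examples/example.zz \
  --threads=4 \
  --queries=100000 \
  --duration=300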

To get into real bug hunting territory, we will use combinations.pl to do an extensive random testing run against a server. You never know what you may find. Small warning; before you log your newly discovered bug, make sure that it is not logged on bugs.mysql.com (for MySQL Server bugs) or on bugs.launchpad.net/percona-server (for Percona Server bugs) already.

In this example, we will be testing Percona Server, as the combinations.pl (.cc) grammar we use is optimized for Percona Server. If you would like to test MySQL Server, you can build your own grammars, or use one of the many grammars available in RQG (though there are not many combinations.pl grammars in RQG yet; there are plenty of less-powerful runall.pl grammars, however). For a MySQL-compatible combinations.pl grammar which tests the optimizer, see randgen/conf/optimizer/starfish.cc – a grammar which I developed whilst working for Oracle.

Another very extensive grammar set, usable with Percona Server 5.6 (we call this our ‘base grammar’ as it tests many features developed for Percona Server), can be found in randgen/conf/percona_qa/5.6/* – edit and then use 5.6.sh, the startup script (in this, set WORKDIR and RQG_DIR), and 5.6.cc, the combinations file (in this, change the path names for the optimized/debug and Valgrind-compiled server to match your system) to get started. More on this below.

An earlier and more limited version of this base grammar can be found in your randgen tree; go to randgen/conf/percona_qa/ and review the files there. This more limited base grammar can be used for testing any version of Percona Server, or you can follow along and use the 5.6 grammar mentioned above and test Percona Server 5.6 – the same basic steps (detailed below) apply.

In the percona_qa directory, the percona_qa.sh script is the start script (like 5.6.sh), percona_qa.yy file contains the SQL (like 5.6.yy etc.), the .zz file contains the data definitions, and finally the .cc file is a combinations.pl setup which “combines” various options from various blocks. Combinations.pl has great bug-hunting power.

Side note: you can read more about how the option blocks work at:
https://github.com/RQG/RQG-Documentation/wiki/RandomQueryGeneratorCombinations

All you need to do to get an exhaustive test run started, is edit some options (assuming you have Percona Server installed on your test machine already) and start the percona_qa.sh script:

1. Edit the “percona_qa.sh” script and set the WORKDIR and RQG_DIR variables (regarding RQG_DIR, the script will normally assume that randgen is stored under WORKDIR, but you can change RQG_DIR to point to your randgen download path instead, for example RQG_DIR=/randgen).

2. Edit the “percona_qa.cc” script and point it to the location of your server in the --basedir= setting (i.e. replace “/Percona-Server” with “/path_to_your_Percona_Server_installation”).

For the moment, you can just use a standard Percona Server installation and remove the Valgrind line directly under the one we just edited (use “dd” in vim), but once you get a bit more professional, compiling from source (“building”) is the way to go.

The reason for building yourself is that if you use a debug-compiled server (i.e. execute ./build/build-binary.sh --debug in a Percona Server source download) or a Valgrind-instrumented compiled server (i.e. execute ./build/build-binary.sh --debug --valgrind in a Percona Server source download) you will find more bugs (the debug server contains more developer debug asserts etc.).

Note you can use the “build_percona.sh” in the percona-qa Launchpad project (more on this below) to quickly build an optimized, debug and Valgrind server from source. build_mysql.sh does the same for MySQL server.

3. Now you’re ready to go; execute ./percona_qa.sh and watch the screen carefully. You’ll likely immediately see some STATUS_ENVIRONMENT_FAILURE runs. This is quite common and means you have made a little error somewhere along the way. Stop the run (ctrl+z, then kill -9 all relevant pids, then execute “fg”). Now edit the files as needed (check all the logs, starting with the failed trials’ ‘trial<no>.log’, etc.). Then start the run again. If your machine is used for testing only (i.e. no production programs running), you can use the following quick command to kill all relevant running mysqld, perl and Valgrind processes:

ps -ef | grep `whoami` | egrep "mysql|perl|valgrind" | grep -v "grep" | awk '{print $2}' | xargs sudo kill -9;

4. Once your run is going, leave it running for a few hours, or a few days (we regularly test with runs that go for 2-5 days or more), and then start checking the logs (trial<nr>.log is the one you want to study first; use “:$” in vim to jump to the end of the file, or “:1” to jump back to the first line).

5. Once you get a bit more professional, use the percona-qa scripts (bzr branch lp:percona-qa) to quickly handle trials of interest. You may want to initially checkout rqg_results.sh, analyze_crash.sh, startup.sh, build_percona.sh, delete_single_trial.sh and keep_single_trial.sh – simply execute them without parameters to get an initial idea on how to use them. These scripts greatly reduce the efforts required when analyzing multiple trials.

6. Finally, for those of you needing to reduce long SQL testcases (from any source) quickly, see reducer.sh in randgen/util/reducer/reducer.sh – it’s a multi-threaded high-performance simplification script I developed whilst working at Oracle. They kindly open sourced it some time ago. You may then also want to check out parse_general_log.pl in the percona-qa scripts listed in point 5 above. This script parses a general log created by mysqld (--general_log option) into a ready-to-use SQL trace.

If you happen to find a bug, share the joy! If you happen to run into issues, post your questions below so others who run into the same can find answers quickly. Also feel free to share any tips you find while playing around with this.

Enjoy!

The post How to find bugs in MySQL appeared first on MySQL Performance Blog.

‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful

Latest MySQL Performance Blog posts - April 16, 2014 - 3:00am

210 people registered for the inaugural “Open Source Appreciation Day” March 31 in Santa Clara, Calif. The event will be held each year at Percona Live henceforth.

To kick off the Percona Live MySQL Conference & Expo 2014, Percona held the first “Open Source Appreciation Day” on Monday, March 31st. Over 210 people registered and the day’s two free events focused on CentOS and OpenStack.

The OpenStack Today event brought together members of the OpenStack community and MySQL experts in an afternoon of talks and sharing of best practices for both technologies. After a brief welcome message from Peter Zaitsev, co-founder and CEO of Percona, Florian Haas shared an introduction to OpenStack including its history and the basics of how it works.

Jay Pipes delivered lessons from the field based on his years of OpenStack experience at AT&T, at Mirantis, and as a frequent code contributor to the project. Jay Janssen, a Percona managing consultant, complemented Jay Pipes’ talk with a MySQL expert’s perspective of OpenStack. He also shared ways to achieve High Availability using the latest version of Galera (Galera 3) and other new features found in the open source Percona XtraDB Cluster 5.6.

Amrith Kumar’s presentation focused on the latest happenings in project Trove, OpenStack’s evolving DBaaS component, and Tesora’s growing involvement. Amrith also won quote of the day for his response to a question about the difference between “elastic” and “scalable.” Amrith: “The waistband on my trousers is elastic. It is not scalable.” Sandro Mazziotta wrapped up the event by sharing the challenges and opportunities of OpenStack from both an integrator as well as operator point of view based on the customer experiences of eNovance.

OpenStack Today was made possible with the support of our sponsors, Tesora and hastexo. Here are links to presentations from the OpenStack Today event. Any missing presentations will soon be added to the OpenStack Today event page.

Speakers in the CentOS Dojo Santa Clara event shared information about the current status of CentOS, the exciting road ahead, and best practices in key areas such as system administration, running MySQL, and administration tools. Here’s a rundown of topics and presentations from the event. Any missing presentations will soon be added to the CentOS Dojo Santa Clara event page.

  • Welcome and Housekeeping
    Karsten Wade, CentOS Engineering Manager, Red Hat
  • The New CentOS Project
    Karsten Wade, CentOS Engineering Manager, Red Hat
  • Systems Automation and Metrics at Pinterest
    Jeremy Carroll, Operations Engineer, Pinterest
  • Software Collections on CentOS
    Joe Brockmeier, Open Source & Standards, Red Hat
  • Two Years Living Your Future
    Joe Miller, Lead Systems Engineer, Pantheon
  • Running MySQL on CentOS Linux
    Peter Zaitsev, CEO and Co-Founder, Percona
  • Notes on MariaDB 10
    Michael Widenius, Founder and CTO, MariaDB Foundation
  • Happy Tools
    Jordan Sissel, Systems Engineer, DreamHost

Thank you to all of the presenters at the Open Source Appreciation Day events and to all of the attendees for joining.

I hope to see you all again this November 3-4  at Percona Live London. The Percona Live MySQL Conference and Expo 2015 will also return to the Hyatt Santa Clara and Santa Clara Convention Center from April 13-16, 2015 – watch for more details in the coming months!

The post ‘Open Source Appreciation Day’ draws OpenStack, MySQL and CentOS faithful appeared first on MySQL Performance Blog.

percona-millipede – Sub-second replication monitor

Latest MySQL Performance Blog posts - April 15, 2014 - 6:00am

I recently helped a client implement a custom replication delay monitor and wanted to share the experience and discuss some of the iterations and decisions that were made. percona-millipede was developed in conjunction with Vimeo with the following high-level goal in mind: implement a millisecond level replication delay monitor and graph the results.  Please visit http://making.vimeo.com for more information and thanks to Vimeo for sharing this tool!

Here is the rough list of iterations we worked through in developing this tool/process:

  1. Standard pt-heartbeat update/monitor
  2. Asynchronous, threaded update/monitor tool
  3. Synchronized (via zeroMQ), threaded version of the tool

pt-heartbeat

Initially, we had been running pt-heartbeat (with the default interval of 1.0) to monitor real replication delay.  This was fine for general alerting, but didn’t allow us to gain deeper insight into the running slaves.  Even when pt-heartbeat says it is “0 seconds behind”, that can actually mean the slave is up to 0.99 seconds behind (which, in SQL time, can be a lot).  Given the sensitivity to replication delay and the high-end infrastructure in place (Percona Server 5.6, 384G RAM, Virident and FusionIO PCI-Flash cards), we decided it was important to absorb the extra traffic in an effort to gain further insight into the precise points of any slave lag.

There had been discussion about tweaking the use of pt-heartbeat (--interval=0.01 with reduced skew) to look at the sub-second delay, but there were some other considerations:

  • The tool needed to update/monitor multiple hosts from a single process (i.e. we didn’t want multiple pt-heartbeat processes to track)
  • Native integration with existing statsd/graphite system

We likely could’ve achieved the above using pt-heartbeat, but it would’ve required a custom wrapper application to handle the other pieces (threads/subprocesses/inter-process communication).  As the main gist of pt-heartbeat is fairly straightforward (select ts from heartbeat table, calculate delay), it was decided to mimic that logic in a custom application that was configurable and met the other goals.

First Iteration – Async update/monitor

The first iteration was to spin up several threads within a main process (I chose Python for this, but it could be anything really) and set each in a loop with a set delay (.01 seconds for example).  One thread sends update statements with a timestamp, the other threads simply select that timestamp and calculate how long ago the row was updated (i.e. current time – row timestamp):
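A purely illustrative sketch of this first iteration follows. Everything here is hypothetical – host names, credentials, a heartbeat table with a DOUBLE ts column, and the choice of mysql.connector as the driver – and it is not the actual percona-millipede code:

import threading
import time

import mysql.connector  # hypothetical choice of driver; any MySQL client library works

# Hypothetical connection details; the heartbeat table is assumed to have
# an id column and a ts DOUBLE column holding a Unix timestamp.
MASTER = dict(host='master.example.com', user='monitor', password='secret', database='percona')
SLAVE = dict(host='slave1.example.com', user='monitor', password='secret', database='percona')
INTERVAL = 0.01  # seconds between iterations

def update_loop():
    """Continuously write the current timestamp to the heartbeat row on the master."""
    conn = mysql.connector.connect(**MASTER)
    conn.autocommit = True
    cur = conn.cursor()
    while True:
        cur.execute("UPDATE heartbeat SET ts = %s WHERE id = 1", (time.time(),))
        time.sleep(INTERVAL)

def monitor_loop(cfg):
    """Continuously read the heartbeat row on a slave and report how stale it is."""
    conn = mysql.connector.connect(**cfg)
    conn.autocommit = True  # avoid re-reading the same snapshot on every select
    cur = conn.cursor()
    while True:
        cur.execute("SELECT ts FROM heartbeat WHERE id = 1")
        (ts,) = cur.fetchone()
        delay_ms = (time.time() - float(ts)) * 1000  # each monitor uses its own clock and schedule
        print("replication delay: %.1f ms" % delay_ms)  # push to statsd/graphite in real life
        time.sleep(INTERVAL)

threading.Thread(target=update_loop, daemon=True).start()
threading.Thread(target=monitor_loop, args=(SLAVE,), daemon=True).start()
time.sleep(60)  # let the loops run; a real tool would run until stopped

Note that the update and monitor threads here each run on their own clock and their own schedule – exactly the detail that caused trouble, as described next.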

This iteration was better (we could see replication delay of 1-999 ms and isolate some sub-optimal processes), but there was still some concern.  When testing the solution, we noticed that when pointing the monitor at a single box (for both the update and the select), we were still seeing replication delay.  After some discussion, it became apparent that using the current CPU time as the baseline was folding the time needed to calculate the delay into the measured delay itself.  Further, since these threads weren’t synchronized, there was no way to determine how long after the update statement the select was even run.

Final Iteration – ZeroMQ update/monitor

Based on this analysis, we opted to tweak the process and use a broadcast model to keep the monitor threads in sync.  For this, I chose to use ZeroMQ for the following reasons:

  • Built in PUB/SUB model with filtering – allows for grouping masters with slaves
  • Flexibility in terms of synchronization (across threads, processes, or servers – just a config tweak to the sockets)

After the update, here was the final architecture:
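A purely illustrative sketch of this final layout, again with hypothetical hosts, credentials and heartbeat table (pyzmq for the PUB/SUB sockets, mysql.connector for the database work – this is not the actual percona-millipede code):

import threading
import time

import zmq                # pyzmq provides the PUB/SUB sockets
import mysql.connector    # hypothetical driver and credentials, as in the earlier sketch

MASTER = dict(host='master.example.com', user='monitor', password='secret', database='percona')
SLAVE = dict(host='slave1.example.com', user='monitor', password='secret', database='percona')

def update_publisher():
    """Update the heartbeat row on the master, then broadcast the timestamp that was written."""
    pub = zmq.Context.instance().socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    conn = mysql.connector.connect(**MASTER)
    conn.autocommit = True
    cur = conn.cursor()
    while True:
        ts = time.time()
        cur.execute("UPDATE heartbeat SET ts = %s WHERE id = 1", (ts,))
        pub.send_string("master1 %.6f" % ts)  # topic prefix groups a master with its slaves
        time.sleep(0.01)

def delay_subscriber():
    """Wait for a published timestamp, then compare it with what the slave has applied."""
    sub = zmq.Context.instance().socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "master1")
    conn = mysql.connector.connect(**SLAVE)
    conn.autocommit = True
    cur = conn.cursor()
    while True:
        _, published_ts = sub.recv_string().split()
        cur.execute("SELECT ts FROM heartbeat WHERE id = 1")
        (slave_ts,) = cur.fetchone()
        delay_ms = (float(published_ts) - float(slave_ts)) * 1000
        print("replication delay: %.1f ms" % delay_ms)  # push to statsd/graphite in real life

threading.Thread(target=update_publisher, daemon=True).start()
threading.Thread(target=delay_subscriber, daemon=True).start()
time.sleep(60)

The topic prefix on each published message is what lets a single process group every master with its own set of slaves, per the PUB/SUB filtering point above.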

In this model, the update thread publishes the timestamp that was set on the master and each monitor thread simply waits as a consumer and then checks the timestamp on the slave vs the published timestamp.  Synchronization is built in using this model and we saw much more accurate results.  As opposed to sitting at 5-10ms all the time with spikes up to 50ms, we found 0ms in most cases (keep in mind that means 0ms from an application standpoint with network latency, time to process the query, etc) with spikes up to 40ms.  Here is a sample graph (from Graph Explorer on top of statsd, courtesy of Vimeo) showing some micro spikes in delay that pointed us to a process that was now noticeable and able to be optimized:

While this process was what the client needed, there are some things that I’d like to point out about this approach that may not apply (or even hinder) other production workloads:

  • The frequency of update/select statements added several hundred queries per second
    • You could configure less frequent update/selects, but then you may see less accuracy
    • The longer the delay between updates, the less chance you will see the delay
  • For replication delay monitoring (i.e. Nagios), 1 second granularity is plenty
    • Typically, you would only alert after several seconds of delay were noticed

Naturally, there are some other factors that can impact the delay/accuracy of this system (pub/sub time, time to issue select, etc), but for the purpose of isolating some sub-optimal processes at the millisecond level, this approach was extremely helpful.

Stay tuned for a followup post where I’ll share the tool and go over its installation, configuration, and other details!

The post percona-millipede – Sub-second replication monitor appeared first on MySQL Performance Blog.

Advisory on Heartbleed (CVE-2014-0160) for Percona’s customers and users

Latest MySQL Performance Blog posts - April 14, 2014 - 8:03am

Over the last few days, the Percona team has spent a lot of time evaluating the impact of the Heartbleed bug (CVE-2014-0160) for our customers and for the users of our software. We published a formal disclosure a few days ago. However, I thought a quick summary and some additional information would be good to provide for our MySQL Performance Blog readers.

First, I want to point out that “Heartbleed” is an issue in a commonly used third-party library which typically comes with your operating system, so there is a lot of software which is impacted. Openly exposed services – typically websites or some form of API – are where the biggest potential impact lies. Even though we talk a lot about MySQL Server (and its variants), it will not be the primary concern for organizations following best practices and not exposing their MySQL server to the open Internet.

Second, if you take care of patching your operating system, this will take care of Percona Server, MariaDB or MySQL Server (see note below) as well as other software which uses the OpenSSL library, as long as it is linked dynamically. It is highly recommended to link OpenSSL dynamically precisely so that such security issues can be handled with a single library update instead of waiting for separate security updates for multiple software packages. Note that updating the library is not enough – you need to restart the service in order for the new library to be loaded. In most cases, I recommend a full system restart as the simplest way to guarantee that all processes using the library have been restarted.

Third, it is worth noting that not all MySQL variants have been impacted and not in all cases. Obviously, your MySQL Server is not impacted if you’re running an operating system which is not vulnerable. You will also not be vulnerable if the MySQL Server or variant you’re using uses yaSSL instead of OpenSSL. In addition, in many cases SSL support is disabled on the server side by default, which might not be the best thing from a security standpoint but can save us from this bug. Finally, in many configurations the SSL/TLS connection setup will take place after initial handshake which does not allow this vulnerability in all cases. I do not have hard numbers but I would guess no more than 10-20% of MySQL (and variants) installations would be vulnerable, even before you look at when they are exposed to the Internet.

To find out whether MySQL is linked with OpenSSL dynamically or compiled with yaSSL, you can use this command:

[root@localhost vagrant]# ldd /usr/sbin/mysqld | grep ssl
        libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007fb7f4cbc000)

It will show “libssl” for a server dynamically linked with OpenSSL, and no matches for a server compiled with yaSSL.

It is worth noting as Lenz Grimmer pointed out in a blog post comment that Heartbleed impacts not only vulnerable servers but vulnerable clients can be at risk as well if they connect to a compromised server which implements code specially targeting the clients. This means you want to make sure to update your client machines as well, especially if you’re connecting to a non-trusted MySQL Server.

But enough on Percona Software. There is an impact to Percona web systems as well. The majority of our systems have not been impacted directly because they were running an OpenSSL version which did not have the Heartbleed vulnerability. However, because of how our systems are integrated, there was a small probability that some customer accounts could be exposed through impacted services: https://rdba.percona.com and https://cloud.percona.com. We promptly patched these services last week, regenerated keys, and reset passwords for all accounts which had even a small chance of being impacted.

We believe our teams have acted quickly and appropriately to secure our systems and minimize the chance of information leaks. We will continue to monitor the situation closely and update you via our advisory document if there is any new information needing your attention.

The post Advisory on Heartbleed (CVE-2014-0160) for Percona’s customers and users appeared first on MySQL Performance Blog.

Percona Live MySQL Conference Highlights

Latest MySQL Performance Blog posts - April 10, 2014 - 4:59am

The Percona Live MySQL Conference and Expo 2014 was March 31st through April 4th in Santa Clara, California. I heard numerous positive comments from attendees and saw even more on social media. Our conference team, led by Kortney Runyan, pulled together a smooth, enjoyable event which made it easy for attendees to focus on learning and networking. Some of the Percona Live MySQL Conference highlights from this year follow.

Percona Live MySQL Conference Highlights

A few stats for the conference this year versus last year:

  • Total registrations were up nearly 15%
  • Attendees represented 40 countries, up from 36 in 2013
  • 34 companies sponsored the conference this year, up from 33 last year
  • This year’s conference covered 5 days including the Open Source Appreciation Day, up from 4 days last year

The conference could not grow without the support of attendees and sponsors. It’s great to see continued growth in attendees which is driven in large part by the word of mouth endorsements of those who attended in previous years.

Keynotes & SiliconANGLE Interviews Video Recordings

The Percona Live MySQL Conference 2014 featured some great keynote presentations. I would like to thank our great keynote speakers who covered a wide variety of topics including MySQL 5.7, integrating MySQL and Hadoop, SSD storage, OpenStack, and the future of running MySQL at scale. If you couldn’t attend this year, you can watch the keynote recordings by visiting the conference website. I was particularly interested in the presentation on MySQL 5.7 from Tomas Ulin, VP of MySQL Engineering for Oracle, which demonstrated the ongoing improvements and commitment Oracle has made to MySQL. I also found the talk by Boris Renski of Mirantis especially helpful in increasing my understanding of the OpenStack ecosystem and the organizations involved in that space. The talk by Robert Hodges of Continuent on integrating MySQL and Hadoop and the talk by Nisha Talagala from Fusion-io on advances in SSD storage are also definitely worth a look.

We were fortunate to have SiliconANGLE at the conference this year, recording video interviews with a wide range of exhibitors and attendees. Whether you attended the conference or not, I think you’ll find the interviews entertaining and enlightening. You can find the videos on the Percona Live conference website or on the SiliconANGLE website or YouTube channel.

Conference Committee

Thanks go first to the nearly 130 speakers who were chosen based on over 300 submitted proposals. The Conference Committee, led by Shlomi Noach and representing a wide array of ecosystem participants, created a solid schedule of tutorials and breakout sessions which covered a wide variety of MySQL-related topics at a variety of levels. Thanks go out to all of the Committee members:

  • Shlomi Noach, Engineer, Outbrain
  • Roland Bouman, Software Engineer, Pentaho
  • Tim Callaghan, VP of Engineering, Tokutek
  • Laine Campbell, CEO and Co-Founder, Blackbird
  • Jeremy Cole, Sr. Systems Engineer, Google
  • Todd Farmer, Director, Oracle
  • Jay Janssen, Consulting Lead, Percona
  • Giuseppe Maxia, QA Director, Continuent
  • Cedric Peintre, DBA, Dailymotion
  • Ivan Zoratti, CTO, SkySQL
  • Liz van Dijk, Head of Technical Account Management, Percona

Visit the Percona Live MySQL Conference 2014 website and click the “Session Slides” to download the slides from the sessions. Check back periodically for new sets of slides added by the conference speakers.

Sponsors

We cannot put on this conference without the support of our sponsors. Special thanks go out to our Diamond Plus sponsors, Fusion-io and Continuent, for their support and their knowledgeable keynote speakers. The full list of sponsors is:

Diamond Plus: Continuent, Fusion-io
Platinum: Booking.com
Gold: Machine Zone, Micron, Pythian, SkySQL
Silver: AppDynamics, Box, InfiniDB, Diablo, Galera/Codership, Google, Rackspace, Tesora, Twitter, Yelp
Exhibitors: Attunity, Blackbird (formerly PalominoDB), Dropbox, Etsy, FoundationDB, HGST, Rocket Fuel, RSSBus, ScaleArc, ScaleBase, Severalnines, Sphinx, Tokutek, VividCortex
Other: Devart, Facebook, Webyog, MailChimp
Media: ADMIN Magazine, Datanami, DBTA, Linux Journal, Linux Pro Magazine, O’Reilly, Software Developer’s Journal
Open Source Appreciation Day: Tesora, hastexo, CentOS

Open Source Appreciation Day

Special thanks go to our partners who helped put together the OpenStack Today and CentOS Dojo Santa Clara events on Monday during the inaugural Open Source Appreciation Day. Attendees for these free events were treated to great talks from noteworthy speakers from the OpenStack, CentOS, and MySQL worlds on a wide variety of topics. Watch for links to the presentations in a future blog post.

WebScaleSQL

One of the biggest announcements this year wasn’t made by a sponsor. The announcement of the WebScaleSQL project, which is being driven by team members from Facebook, Google, Twitter, and LinkedIn, got a lot of attention from attendees and from the media. My takeaway was that the project is both a strong vote of confidence in MySQL (versus other MySQL variants or alternatives) and a promise of continued improvement in MySQL for large scale web applications in the future.

Percona Live London 2014 and Percona Live MySQL Conference 2015 Dates

Our next event is Percona Live London which is November 3-4, 2014. The Percona Live MySQL Conference and Expo 2015 will be April 13-16, 2015 at the Hyatt Santa Clara and Santa Clara Convention Center – watch for more details in the coming months!

The post Percona Live MySQL Conference Highlights appeared first on MySQL Performance Blog.

Heartbleed: Separating FAQ From FUD

Latest MySQL Performance Blog posts - April 9, 2014 - 11:52am

If you’ve been following this blog (my colleague, David Busby, posted about it yesterday) or any tech news outlet in the past few days, you’ve probably seen some mention of the “Heartbleed” vulnerability in certain versions of the OpenSSL library.

So what is ‘Heartbleed’, really?

In short, Heartbleed is an information-leak issue. An attacker can exploit this bug to retrieve the contents of a server’s memory without any need for local access. According to the researchers that discovered it, this can be done without leaving any trace of compromise on the system. In other words, if you’re vulnerable, they can steal your keys and you won’t even notice that they’ve gone missing. I use the word “keys” literally here; by being able to access the contents of the impacted service’s memory, the attacker is able to retrieve, among other things, private encryption keys for SSL/TLS-based services, which means that the attacker would be able to decrypt the communications, impersonate other users (see here, for example, for a session hijacking attack based on this bug), and generally gain access to data which is otherwise believed to be secure. This is a big deal. It isn’t often that bugs have their own dedicated websites and domain names, but this one does: http://www.heartbleed.com

Why is it such a big deal?

One, because it has apparently been in existence since at least 2012. Two, because SSL encryption is widespread across the Internet. And three, because there’s no way to know if your keys have been compromised. The best detection that currently exists for this are some Snort rules, but if you’re not using Snort or some other IDS, then you’re basically in the dark.

What kinds of services are impacted?

Any software that uses the SSL/TLS features of a vulnerable version of OpenSSL. This means Apache, NGINX, Percona Server, MariaDB, the commercial version of MySQL 5.6.6+, Dovecot, Postfix, SSL/TLS VPN software (OpenVPN, for example), instant-messaging clients, and many more. Also, software packages that bundle their own vulnerable version of OpenSSL rather than relying on the system’s version, which might be patched. In other words, it’s probably easier to explain what isn’t affected.

What’s NOT impacted?

SSH does not use SSL/TLS, so you’re OK there. If you downloaded a binary installation of MySQL community from Oracle, you’re also OK, because the community builds use yaSSL, which is not known to be vulnerable to this bug. Obviously, any service which doesn’t use SSL/TLS is not going to be vulnerable, either, since the salient code paths aren’t going to be executed. So, for example, if you don’t use SSL for your MySQL connections, then this bug isn’t going to affect your database server, although it probably still impacts you in other ways (e.g., your web servers).

What about Amazon cloud services?

According to Amazon’s security bulletin on the issue, all vulnerable services have been patched, but they still recommend that you rotate your SSL certificates.

Do I need to upgrade Percona Server, MySQL, NGINX, Apache, or other server software?

No, not unless you built any of these and statically-linked them with a vulnerable version of OpenSSL. This is not common. 99% of affected users can fix this issue by upgrading their OpenSSL libraries and cycling their keys/certificates.

What about client-level tools, like Percona Toolkit or XtraBackup?

Again, no. The client sides of Percona Toolkit, Percona XtraBackup, and Percona Cloud Tools (PCT) are not impacted. Moreover, the PCT website has already been patched. Encrypted backups are still secure.

There are some conflicting reports out there about exactly how much information leakage this bug allows. What’s the deal?

Some of the news articles and blogs claim that with this bug, any piece of the server’s memory can be accessed. Others have stated that the disclosure is limited to memory space owned by processes using a vulnerable version of OpenSSL. As far as we are aware, and as reported in CERT’s Vulnerability Notes Database, the impact of the bug is the latter; i.e., it is NOT possible for an attacker to use this exploit to retrieve arbitrary bits of your server’s memory, only bits of memory from your vulnerable services. That said, your vulnerable services could still leak information that attackers could exploit in other ways.

How do I know if I’m affected?

You can test your web server at http://filippo.io/Heartbleed/ – enter your site’s URL and see what it says. If you have Go installed, you can also get a command-line tool from Github and test from the privacy of your own workstation. There’s also a Python implementation. You can also check the version of OpenSSL that’s installed on your servers. If you’re running OpenSSL 1.0.1 through 1.0.1f or 1.0.2-beta, you’re vulnerable. A side note here: some distributions, such as RHEL/CentOS, have patched the issue without actually updating the OpenSSL version number; on RPM-based systems you can view the changelog to see if it’s been fixed, for example:

rpm -q --changelog openssl | head -2
* Mon Apr 07 2014 Tomáš Mráz <tmraz@redhat.com> 1.0.1e-16.7
- fix CVE-2014-0160 - information disclosure in TLS heartbeat extension

Also, note that versions of OpenSSL prior to 1.0.1 are not known to be vulnerable. If you’re still unsure, we would be happy to assist you in determining the extent of the issue in your environment and taking any required corrective action. Just give us a call.
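If Python happens to be installed on the machine, one more rough indicator is to ask Python’s ssl module which OpenSSL library it is linked against (on most distributions this is the shared system library, so it reflects what other dynamically linked programs will load):

import ssl

# Prints something like "OpenSSL 1.0.1e-fips 11 Feb 2013". Versions 1.0.1 through
# 1.0.1f and 1.0.2-beta are the vulnerable ones, subject to the distribution
# backport caveat described above.
print(ssl.OPENSSL_VERSION)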

How can I know if I’ve been compromised?

If you’ve already been compromised, you won’t. However, if you use Snort as an IDS, you can use some rules developed by Fox-IT to detect and defer new attacks; other IDS/IPS vendors may have similar rule updates available. Without some sort of IDS in place, however, attacks can and will go unnoticed.

Are there any exploits for this currently out there?

Currently, yes, there are some proofs-of-concept floating around out there, although before this week, that was uncertain. But given that this is likely a 2-year-old bug, I would be quite surprised if someone, somewhere (you came to my talk at Percona Live last week, didn’t you? Remember what I said about assuming that you’re already owned?) didn’t have a solid, viable exploit.

So, then, what should I do?

Ubuntu, RedHat/CentOS, Amazon, and Fedora have already released patched versions of the OpenSSL library. Upgrade now. Do it now, as in right now. If you’ve compiled your own version of OpenSSL from source, upgrade to 1.0.1g or rebuild your existing source with the -DOPENSSL_NO_HEARTBEATS flag.

Once that’s been done, stop any certificate-using services, regenerate the private keys for any services that use SSL (Apache, MySQL, whatever), and then obtain a new SSL certificate from whatever certificate authority you’re using. Once the new certificates are installed, restart your services. You can also, of course, do the private key regeneration and certificate cycling on a separate machine, and only bring the service down long enough to install the new keys and certificates. Yes, you will need to restart MySQL. Or Apache. Or whatever. Full stop, full start, no SIGHUP (or equivalent).

Unfortunately, that’s still not all. Keeping in mind the nature of this bug, you should also change / reset any user passwords and/or user sessions that were in effect prior to patching your system and recycling your keys. See, for example, the session hijacking exploit referenced above. Also note that Google services were, prior to their patching of the bug, vulnerable to this issue, which means that it’s entirely possible that your Gmail login (or any other Google-related login) could have been compromised.

Can I get away with just upgrading OpenSSL?

NO. At a bare minimum, you will need to restart your services, but in order to really be sure you’ve fixed the problem, you also need to cycle your keys and certificates (and revoke your old ones, if possible). This is the time-consuming part, but since you have no way of knowing whether or not someone has previously compromised your private keys (and you can bet that now that the bug is public, there will be a lot of would-be miscreants out there looking for servers to abuse), the only safe thing to do is cycle them. You wouldn’t leave your locks unchanged after being burgled, would you?

Also note that once you do upgrade OpenSSL, you can get a quick list of the services that need to be restarted by running the following:

sudo lsof | grep ssl | grep DEL

Where can I get help and/or more information?

In addition to the assorted links already mentioned, you can read up on the nuts and bolts of this bug, or various news articles such as this or this. There are a lot of articles out there right now, most of which are characterizing this as a major issue. It is. Test your vulnerability, upgrade your OpenSSL and rotate your private keys and certificates, and then change your passwords.

The post Heartbleed: Separating FAQ From FUD appeared first on MySQL Performance Blog.
