
Latest MySQL Performance Blog posts

Percona's MySQL & InnoDB performance and scalability blog

MySQL performance optimization: Don’t guess! Measure with Percona Cloud Tools

January 29, 2014 - 7:49am

In our practice we often see that MySQL performance optimization is done in a sort of “black magic” way. A common opinion is: “If there is a performance problem – then it must be a database issue, because what else could it be? And if this is a database issue, then it must involve IO problems because the reason for a slow database is always a slow IO…”  Following this logic might actually give a result, but achieving a fully successful resolution would require magic.

At Percona we use a different approach. Performance optimization should not be based on guesses but on exact measurements. Applied to databases, as I described previously, we use queries as the units of work, and properly measuring these units is the first important task in performance optimization.

Let me list the metrics that Percona Cloud Tools provides:

  • Query count – How many times the query was executed
  • Query_time – Total time MySQL spent on query execution
  • Lock_time – Time spent waiting on table-level locks
  • Rows_sent – How many rows the query returned to the application
  • Rows_examined – How many rows MySQL actually had to read

In my previous post I indicated that the Rows_sent/Rows_examined ratio is one of particular interest for OLTP workloads.
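If you want a rough look at this ratio before wiring up any agent, here is a minimal sketch against performance_schema (an assumption: you are on MySQL 5.6 with statement-digest instrumentation enabled; Percona Cloud Tools tracks the same ratio per query over time):

mysql> SELECT DIGEST_TEXT,
    ->        SUM_ROWS_EXAMINED / SUM_ROWS_SENT AS examined_per_sent
    ->   FROM performance_schema.events_statements_summary_by_digest
    ->  WHERE SUM_ROWS_SENT > 0
    ->  ORDER BY examined_per_sent DESC
    ->  LIMIT 5;

A ratio far above 1 flags queries that read many rows they never return.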

The following metrics are available only for Percona Server, not because we crippled our tools, but because MySQL simply does not provide them. It is worth remembering that one of the main goals in creating Percona Server was to provide diagnostics and transparency.

Percona Server metrics:

  • Rows_affected
  • Merge_passes
  • InnoDB_IO_r_bytes
  • InnoDB_IO_r_ops
  • InnoDB_IO_r_wait
  • InnoDB_pages_distinct
  • InnoDB_queue_wait
  • InnoDB_rec_lock_wait
  • Query_length
  • Bytes_sent
  • Tmp_tables
  • Tmp_disk_tables
  • Tmp_table_sizes

The meaning of all these metrics is described on our docs page.

Now, often it is not enough to provide a single value for a given metric, so the following stats are available:
Total, Average, Minimum, Median, 95%, Maximum and Stddev.
Add a trending graph for each of these and you get a full matrix of statistics on your query.

Having all of these comprehensive metrics about your queries, you can now make an intelligent decision on how to approach performance optimization of your database.

You can do it all by yourself today with the free Beta of Percona Cloud Tools.

I understand that all of this information might be too much to figure out at first, but as with any tool, it takes skills and some experience to analyze and apply metrics properly. To help you bootstrap with Percona Cloud Tools, I will be running a webinar, “Analyze MySQL Query Performance with Percona Cloud Tools,” on Feb. 12, 2014; if you register and install the Percona Cloud Tools agent, you may win a free consulting hour from me, during which I will examine your queries and provide advice for optimization. See the full conditions there.


Looking to upgrade to MySQL 5.6? Check out my webinar on Jan 29!

January 27, 2014 - 6:58am

We are nearing the one-year anniversary since MySQL 5.6 went GA – which is typically a good time even for the most conservative users to start thinking about upgrading. At this point there is a fair amount of practical use and experience; many bugs have also been fixed (1991 to be exact according to Morgan Tocker).

We also know that MySQL 5.6 has been used in some very demanding environments on a very large scale, such as at Facebook. And we know from the Facebook team, which kindly shared its upgrade experiences, that it takes a lot of work to upgrade to MySQL 5.6. That’s where Percona can help!

In my webinar this Wednesday, “Upgrading to MySQL 5.6: Best Practices,” I will share some of the issues and best practices we have discovered here at Percona in helping our customers upgrade to MySQL 5.6 – and also Percona Server 5.6, which is an enhanced, drop-in MySQL replacement. My webinar starts at 10 a.m. Pacific time. You can register here, and that URL will be available afterwards for the recording.

While we’re on the upgrade topic: MySQL 5.1 has reached EOL when it comes to MySQL support from Oracle. According to mysql.com, “Users are encouraged to upgrade to MySQL 5.6.,” and, “Per Oracle’s Lifetime Support policy, as of December 31, 2013, MySQL 5.1 is covered under Oracle Sustaining Support.” That means there will be no new releases, no new fixes (no error correction for new issues), no new updates.

Only existing updates, fixes and alerts are available, which means no more releases for MySQL 5.1 even if you run into crash-inducing bugs or need security fixes. The former can be especially worrying, as MySQL 5.1 just had some significant vulnerabilities fixed (more details), and chances are it will not take long for the next significant security issue to be discovered.

One solution to this problem is to upgrade to MySQL 5.6 or MySQL 5.5, which will be supported for a while. That’s why Wednesday’s webinar is important to attend if you haven’t yet upgraded. You should also consider calling Percona to help with upgrading to MySQL 5.6, especially if you do not have experience doing MySQL upgrades.

Another solution is to use Percona’s MySQL Support, which continues to cover MySQL 5.1 and Percona Server 5.1 and will provide fixes for crash-inducing bugs and security issues. I hope to see you on Wednesday!


FOSDEM 2014 MySQL Devroom, Community Dinner in Brussels this Saturday!

January 26, 2014 - 10:00pm

This weekend, on 1-2 February, FOSDEM 2014 will take place in Brussels, Belgium.
There will be MySQL talks on Saturday in the MySQL Devroom and a MySQL Community dinner afterward.

30 people have already signed up for the community dinner; we’re almost at last year’s attendance.

Last orders for the MySQL Community dinner will be accepted on Wednesday 29 January, so if you plan to attend, make sure to buy your tickets now. After that day, we won’t be able to add more people to the list.

I also wanted to thank the sponsors. Without them, this community dinner would not have been possible:


MySQL server memory usage troubleshooting tips

January 24, 2014 - 6:00am

There are many blog posts already written on topics related to “MySQL server memory usage,” but nevertheless some people still get confused when troubleshooting memory-usage issues with MySQL. As a Percona support engineer, I regularly see issues related to heavy server load, or the OOM killer being invoked and killing the MySQL server due to high memory usage, or questions like: “I don’t know why MySQL is taking so much memory. How do I find where exactly the memory is allocated? Please help!”

There are many ways to check the memory consumption of MySQL, so in this post I’m trying to combine all the details I know of.

  • Check memory related Global/Session variables.

If you are using MyISAM you need to check key_buffer_size, while for InnoDB you should check innodb_buffer_pool_size, innodb_additional_mem_pool_size, innodb_log_buffer_size and innodb_sort_buffer_size (used only for sorting data while creating an index in InnoDB, introduced in 5.6). max_connections, query_cache_size and table_cache are also important variables to check.

We know that whenever a thread connects to MySQL, it will need its own buffers when performing complex operations like full-text search, sorting, creating temporary tables, etc. So we also need to check the sizes of read_buffer_size, sort_buffer_size, read_rnd_buffer_size and tmp_table_size.
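As a rough back-of-the-envelope check, you can add up the global buffers plus max_connections times the per-thread buffers. This is only a sketch, not a true bound: as discussed below, some buffers can be allocated several times per query and some allocations are not governed by any variable:

mysql> SELECT ( @@key_buffer_size + @@innodb_buffer_pool_size + @@innodb_log_buffer_size
    ->          + @@max_connections * ( @@read_buffer_size + @@read_rnd_buffer_size
    ->                                  + @@sort_buffer_size + @@join_buffer_size
    ->                                  + @@tmp_table_size ) )
    ->        / 1024 / 1024 / 1024 AS naive_worst_case_gb;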

There is a very good quote from High Performance MySQL, 3rd Edition: “You can think of MySQL’s memory consumption as falling into two categories: the memory you can control, and the memory you can’t. You can’t control how much memory MySQL uses merely to run the server, parse queries, and manage its internals, but you have a lot of control over how much memory it uses for specific purposes.” So we have to understand the purpose of any variable we configure, whether it is set at the global or the session level. I would like to explain more about that here.

For global variables like key_buffer_size, query_cache_size etc., MySQL always allocates and initializes the specified amount of memory all at once when the server starts. This is not the case for variables that have a global default but can also be set per session: for read_buffer_size, sort_buffer_size and join_buffer_size, MySQL doesn’t allocate any memory until a query needs it. But when a query does need it, the entire specified chunk is allocated immediately, so even a small sort allocates the full buffer size, which is simply wasted memory. Some buffers can even be allocated multiple times: for example, on queries that join several tables a join buffer can be allocated once per joined table, and complicated queries including sub-queries can use multiple sort buffers at the same time, which can lead to high memory consumption. In other scenarios a query will not use the sort buffer at all, whatever its size, e.g. when it selects by primary key. So it depends on the nature of your environment, but I would say it is always better to start with a safe value, which can be raised above the default when needed, rather than one so large it can consume all of the server’s memory.
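If a specific statement really does need a bigger buffer, one pattern is to raise it only for that session and reset it afterward; a minimal sketch (the table name and size are hypothetical):

mysql> SET SESSION sort_buffer_size = 16*1024*1024;  -- only for this heavy query
mysql> SELECT * FROM big_table ORDER BY some_unindexed_column;
mysql> SET SESSION sort_buffer_size = DEFAULT;       -- back to the safe global value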

One more thing: not all per-thread memory allocation is controlled by variables. Some per-thread memory allocation is done by MySQL itself when running complex processes/queries like stored procedures, and it can take an unlimited amount of memory while running. Sometimes the optimizer can also take a lot of memory when working with highly complex queries, which we generally can’t control with any configuration parameter.

Even innodb_buffer_pool_size is not a hard limit: usually InnoDB uses about 10% more memory than the amount specified. Many people do not recommend using both the MyISAM and InnoDB storage engines at the same time on a production server, because each has its own buffers, which together can eat all the server memory.

For detailed information related to this topic, I would suggest reading this post from Peter Zaitsev titled “MySQL Server Memory Usage.”

  • Check “SHOW ENGINE INNODB STATUS” for the “BUFFER POOL AND MEMORY” section

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 137363456; in additional pool allocated 0
Dictionary memory allocated 76056
Buffer pool size   8191
Free buffers       7804
Database pages     387
Old database pages 0
Modified db pages  0

The above is from stock MySQL; if you check the same section in Percona Server you’ll get some more information.

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 137756672; in additional pool allocated 0
Total memory allocated by read views 88
Internal hash tables (constant factor + variable factor)
    Adaptive hash index 2217584        (2213368 + 4216)
    Page hash           139112 (buffer pool 0 only)
    Dictionary cache    597885         (554768 + 43117)
    File system         83536          (82672 + 864)
    Lock system         333248         (332872 + 376)
    Recovery system     0              (0 + 0)
Dictionary memory allocated 43117
Buffer pool size        8191
Buffer pool size, bytes 134201344
Free buffers            7760
Database pages          431
Old database pages      0
Modified db pages       0

This will give you information regarding how much memory is allocated by InnoDB. You can see here “Total Memory Allocated”, “Internal Hash Tables”, “Dictionary Memory Allocated”, “Buffer Pool Size” etc.

  • Profiling MySQL Memory usage with Valgrind Massif

Recently I used this tool and, to my surprise, got very good statistics about memory usage. The only catch is that you have to shut down mysqld, start it under valgrind massif and, after collecting statistics, shut it down again and restart it normally.

$ /etc/init.d/mysql stop
$ valgrind --tool=massif --massif-out-file=/tmp/massif.out /usr/sbin/mysqld
$ /etc/init.d/mysql restart

After getting the massif.out file, you have to read it with the ms_print command. You will see a pretty nice graph followed by statistics, i.e.:

[root@percona1 ~]# ms_print /tmp/massif.out
--------------------------------------------------------------------------------
Command:            /usr/sbin/mysqld
Massif arguments:   --massif-out-file=/tmp/massif.out
ms_print arguments: /tmp/massif.out
--------------------------------------------------------------------------------

    MB
50.22^                                                                      ##
     |  (ASCII graph of mysqld heap usage over time, peaking at ~50MB;
     |   condensed here from the original ms_print output)
   0 +----------------------------------------------------------------------->Mi
     0                                                                   575.9

Number of snapshots: 96
 Detailed snapshots: [1, 7, 11 (peak), 16, 35, 48, 58, 68, 78, 88]

--------------------------------------------------------------------------------
  n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
  0              0                0                0             0            0
  1      6,090,089          195,648          194,590         1,058            0
99.46% (194,590B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->48.79% (95,458B) 0x7A1D20: my_malloc (my_malloc.c:38)
| ->25.08% (49,060B) 0x6594F1: read_texts(char const*, char const*, char const***, unsigned int) (derror.cc:160)
| | ->25.08% (49,060B) 0x6597C2: init_errmessage() (derror.cc:69)
| | ->25.08% (49,060B) 0x506232: init_common_variables() (mysqld.cc:3414)
| | ->25.08% (49,060B) 0x508CBB: mysqld_main(int, char**) (mysqld.cc:4461)
| | ->25.08% (49,060B) 0x5B2CD1B: (below main) (in /lib64/libc-2.12.so)
| |
| ->09.36% (18,317B) 0x789571: my_read_charset_file (charset.c:364)
| | ->09.36% (18,317B) 0x789DEC: init_available_charsets (charset.c:458)
| | ->09.36% (18,317B) 0x4E35D31: pthread_once (in /lib64/libpthread-2.12.so)
| | ->09.36% (18,317B) 0x789C80: get_charset_by_csname (charset.c:644)
| | ->09.36% (18,317B) 0x5062E9: init_common_variables() (mysqld.cc:3439)
| | ->09.36% (18,317B) 0x508CBB: mysqld_main(int, char**) (mysqld.cc:4461)
| | ->09.36% (18,317B) 0x5B2CD1B: (below main) (in /lib64/libc-2.12.so)
| |
| ->08.37% (16,384B) 0x79DEEF: my_set_max_open_files (my_file.c:105)
| | ->08.37% (16,384B) 0x506169: init_common_variables() (mysqld.cc:3373)
| | ->08.37% (16,384B) 0x508CBB: mysqld_main(int, char**) (mysqld.cc:4461)
| | ->08.37% (16,384B) 0x5B2CD1B: (below main) (in /lib64/libc-2.12.so)
| |
| ->04.36% (8,536B) 0x788DB4: init_dynamic_array2 (array.c:70)
| | ->02.45% (4,800B) 0x5CD51A: add_status_vars(st_mysql_show_var*) (sql_show.cc:2062)
| | | ->02.45% (4,800B) 0x505E68: init_common_variables() (mysqld.cc:3245)
| | | ->02.45% (4,800B) 0x508CBB: mysqld_main(int, char**) (mysqld.cc:4461)
| | | ->02.45% (4,800B) 0x5B2CD1B: (below main) (in /lib64/libc-2.12.so)
| | |

You can see from the output where memory is allocated, by which function, and so on. You can use this tool to find memory leaks; you can get more information here on how to install and use it. Here’s another related post by Roel Van de Paar titled “Profiling MySQL Memory Usage With Valgrind Massif.”

If possible, Valgrind massif should not be used on a busy production server, as it can degrade performance. It is generally used to find memory leaks by creating a mirror environment on a test/staging server and running it there. It needs a debug binary to run, so it decreases performance a lot; it can be used to investigate specific cases, but not for regular use.

  • Plot memory usage by monitoring ps output.

This is also useful when you want to check how much virtual (VSZ) and resident (RSS) memory mysqld uses. You can simply run a bash script to monitor it, like the following:

while true
do
  date >> ps.log
  ps aux | grep mysqld >> ps.log
  sleep 60
done
Or you can check on demand from the shell prompt with the “ps aux | grep mysqld” command.

  • Memory tables in MySQL 5.7

With MySQL 5.7, some very interesting memory statistics tables were introduced in performance_schema to check memory usage. There is no detailed documentation available yet, but you can check some details here: http://dev.mysql.com/doc/refman/5.7/en/memory-summary-tables.html

In P_S, there are five memory summary tables.

mysql> show tables like '%memory%';
+------------------------------------------+
| Tables_in_performance_schema (%memory%)  |
+------------------------------------------+
| memory_summary_by_account_by_event_name  |
| memory_summary_by_host_by_event_name     |
| memory_summary_by_thread_by_event_name   |
| memory_summary_by_user_by_event_name     |
| memory_summary_global_by_event_name      |
+------------------------------------------+
5 rows in set (0.00 sec)

So for every event you can get summarized memory consumption for a particular account, host, thread or user. Digging further, I found around 209 different events to check. Below I check one event, related to the join buffer.

mysql> select * from memory_summary_by_account_by_event_name
    -> where SUM_NUMBER_OF_BYTES_ALLOC <> 0 and user = 'msandbox'
    -> and event_name = 'memory/sql/JOIN_CACHE' \G
*************************** 1. row ***************************
                        USER: msandbox
                        HOST: localhost
                  EVENT_NAME: memory/sql/JOIN_CACHE
                 COUNT_ALLOC: 2
                  COUNT_FREE: 2
   SUM_NUMBER_OF_BYTES_ALLOC: 524288
    SUM_NUMBER_OF_BYTES_FREE: 524288
              LOW_COUNT_USED: 0
          CURRENT_COUNT_USED: 0
             HIGH_COUNT_USED: 1
    LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 0
   HIGH_NUMBER_OF_BYTES_USED: 262144
1 row in set (0.00 sec)

mysql> show global variables like 'join%';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| join_buffer_size | 262144 |
+------------------+--------+
1 row in set (0.00 sec)

Here, COUNT_ALLOC and COUNT_FREE aggregate the number of calls to malloc-like and free-like functions. SUM_NUMBER_OF_BYTES_ALLOC and SUM_NUMBER_OF_BYTES_FREE indicate the aggregate size of allocated and freed memory blocks. CURRENT_COUNT_USED is the aggregate number of currently allocated blocks that have not been freed yet, and CURRENT_NUMBER_OF_BYTES_USED is the aggregate size of currently allocated memory blocks that have not been freed yet. The LOW_ and HIGH_ columns are the corresponding low and high water marks.
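For instance, to see which instrumented events currently hold the most memory server-wide, a sketch against the global summary table:

mysql> SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED
    ->   FROM performance_schema.memory_summary_global_by_event_name
    ->  ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
    ->  LIMIT 10;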

  • Check pt-summary and pt-mysql-summary output.

If you are aware of these Percona Toolkit scripts, you know they give a very good summary of overall server memory consumption as well as MySQL-related usage.

For example, in the output of pt-summary:

# Memory #####################################################
       Total | 11.8G
        Free | 143.7M
        Used | physical = 11.6G, swap allocated = 4.0G, swap used = 0.0, virtual = 11.6G
     Buffers | 224.9M
      Caches | 6.2G
       Dirty | 164 kB
     UsedRSS | 4.8G

And in the output of pt-mysql-summary:

# Query cache ################################################
  query_cache_type | OFF
              Size | 0.0
             Usage | 0%
  HitToInsertRatio | 0%
# InnoDB #####################################################
           Version | 5.5.30-rel30.2
  Buffer Pool Size | 4.0G
  Buffer Pool Fill | 35%
 Buffer Pool Dirty | 1%
# MyISAM #####################################################
         Key Cache | 32.0M
          Pct Used | 20%
         Unflushed | 0%

Conclusion: 

It is really important for us to know where MySQL allocates memory and how it affects the overall load and performance of the MySQL server. I have described a few approaches here, but I still think we should have some sort of script that combines all of these results and gives us a truthful picture of memory usage in MySQL.


Percona XtraBackup 2.1.7 is now available

January 24, 2014 - 12:50am

Percona is glad to announce the release of Percona XtraBackup 2.1.7 on January 24th, 2014. Downloads are available from our download site here and Percona Software Repositories.

This release is the current GA (Generally Available) stable release in the 2.1 series. Percona XtraBackup is open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.

New Features

  • Percona XtraBackup has now been rebased on MySQL versions 5.1.73, 5.5.35, 5.6.15 and Percona Server versions 5.1.73-rel14.11 and 5.5.35-rel33.0.

Bugs Fixed

  • After being rebased on MySQL 5.6.11, Percona XtraBackup was affected by upstream bug #69780. Fixed by rebasing Percona XtraBackup on MySQL 5.6.15, which contains a fix for the upstream bug. Bug fixed #1203669.
  • A backup directory needed to be specified even when running innobackupex with the --help and --version options. Bug fixed #1223716.
  • When creating an incremental backup with the changed page tracking feature available in Percona Server, innobackupex would fail if the server had the ANSI_QUOTES SQL mode enabled. Bug fixed #1248331.
  • When innobackupex --galera-info is specified and Percona XtraDB Cluster is using GTID replication (version 5.6 only), innobackupex will execute FLUSH BINARY LOGS and then carry the current binary log as indicated in SHOW MASTER STATUS into the backup set. Bug fixed #1250375.
  • Percona XtraBackup did not roll back prepared XA transactions when applying the log, which was a regression introduced with the original port of Percona XtraBackup patches to 5.6. Fixed by restoring code that had been lost in the port. Bug fixed #1254227.
  • Percona XtraBackup now uses libgcrypt’s built-in randomization functions for setting the initialization vector. Bug fixed #1255300.
  • xtrabackup_56 didn’t support the ALL_O_DIRECT option for innodb_flush_method in Percona Server 5.6. Bug fixed #1261877.

Other bugs fixed: #1255019, #1268325, #1265759, #1269694, #1271501.

Release notes with all the bugfixes for Percona XtraBackup 2.1.7 are available in our online documentation. All of Percona’s software is open source and free; all the details of the release can be found in the 2.1.7 milestone at Launchpad. Bugs can be reported on the Launchpad bug tracker.


Percona Server: Improve Scalability with Percona Thread Pool

January 23, 2014 - 2:28pm

By default, the MySQL server spawns a separate thread for every client connection, and that thread processes all statements for the connection. This is the ‘one-thread-per-connection’ model. It’s simple and efficient until some number of connections N is reached. After this point, performance of the MySQL server degrades, mostly due to various contentions caused by N threads trying to access shared resources: either system ones like CPU, IO and memory, or MySQL-specific ones: structures, locks, etc. To keep the system stable and avoid degradation in performance, we need to limit the number of active threads while not limiting the number of client connections. The ‘Thread Pool’ model helps us achieve that: it allows mapping N client connections onto M active threads (the ones actually performing work) while maintaining smooth and stable throughput for the MySQL server.

There are several implementations of the Thread Pool model:
- Commercial: Oracle/MySQL provides thread_pool plugin as part of the Enterprise subscription
- Open Source: The MariaDB thread_pool developed by Vladislav Vaintroub

In Percona Server we have included the latter, and we have further enhanced and improved it.

To demonstrate how the thread pool can help improve scalability, we ran a sysbench/OLTP_RW workload up to 16,384 threads against the latest MySQL server, Percona Server, and Percona Server with the thread pool enabled, for both IO-bound and CPU-bound loads on a Dell R720 server with 16 cores/32 vCPUs.

The current thread pool implementation in Percona Server is built into the server, unlike Oracle’s commercial version, which is implemented as a plugin. To enable the thread pool with Percona Server, you simply need to specify ‘thread_handling=pool-of-threads’ in the my.cnf file (before startup/restart) and adjust the number of thread pool groups with the ‘thread_pool_size’ variable; the latter can be changed after server start. In our runs with the thread pool we used ‘thread_pool_size=36’.
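For reference, a minimal my.cnf sketch matching the setup described above (thread_pool_size=36 was tuned to this particular 16-core box and is not a general recommendation):

[mysqld]
thread_handling  = pool-of-threads   # requires a restart to take effect
thread_pool_size = 36                # can be adjusted at runtime afterwards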

IO bound: sysbench dataset 32 tables/12M rows each (~100GB), InnoDB buffer pool=25GB

In-memory/CPU bound: sysbench dataset 32 tables/12M rows each (~100GB), InnoDB buffer pool=100GB

As you can see in both scenarios above, beyond 1024 threads the standalone server is not capable of keeping throughput at the same level, while with the thread pool enabled throughput stays stable and smooth up to 16,384 client connections.

Conclusion: if you regularly go over 512-1024 connections, it is definitely worth trying Percona’s thread pool implementation to protect your server from serious performance degradation due to server overload.


FOSDEM 2014 MySQL & Friends devroom

January 23, 2014 - 12:00am

FOSDEM is a free, yearly event that offers open source communities a place to meet, share ideas and collaborate. It takes place in Brussels, Belgium each February.

10 years ago now, the first MySQL devroom was organized there by David Axmark, marking the first time MySQL users got their own space to come together and discuss at the event. On this anniversary edition, Percona consultants will be present at the MySQL & Friends Community Booth, as well as giving talks in the devroom (full schedule):

* Frédéric Descamps at 12:35 with 15 Tips to improve your Galera Cluster
* Stéphane Combaudon at 15:05 with Handling failover with MySQL 5.6 and Global Transaction IDs
* Peter Boros at 17:35 with Practical sysbench

The MySQL & Friends devroom will end with its traditional Community Dinner, organized by Percona’s Liz, Kenny and Dimitri and co-sponsored by Percona, Oracle and MariaDB. Registration for the dinner is MANDATORY this year, so don’t forget to reserve your spot!

Looking forward to seeing you all on Friday night during the traditional beer event opening FOSDEM 2014!


10 MySQL settings to tune after installation

January 22, 2014 - 8:46am

When we are hired for a MySQL performance audit, we are expected to review the MySQL configuration and to suggest improvements. Many people are surprised because in most cases, we only suggest to change a few settings even though hundreds of options are available. The goal of this post is to give you a list of some of the most critical settings.

We already made such suggestions in the past here on this blog, but things have changed a lot in the MySQL world since then!

Before we start…

Even experienced people can make mistakes that can cause a lot of trouble. So before blindly applying the recommendations of this post, please keep in mind the following items:

  • Change one setting at a time! This is the only way to estimate whether a change is beneficial.
  • Most settings can be changed at runtime with SET GLOBAL. This is very handy and allows you to quickly revert the change if it creates a problem. In the end, though, you want the setting adjusted permanently in the configuration file (see the sketch after this list).
  • A change in the configuration is not visible even after a MySQL restart? Did you use the correct configuration file? Did you put the setting in the right section? (All settings in this post belong to the [mysqld] section.)
  • The server refuses to start after a change: did you use the correct unit? For instance, innodb_buffer_pool_size is sized in bytes (suffixes like M or G are accepted) while max_connections is a plain number.
  • Do not allow duplicate settings in the configuration file. If you want to keep track of the changes, use version control.
  • Don’t do naive math, like “my new server has 2x RAM, I’ll just make all the values 2x the previous ones.”
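For example, a minimal sketch of that runtime-first workflow (the variable and value are purely illustrative):

mysql> SET GLOBAL max_connections = 300;  -- test at runtime, easy to revert
mysql> SHOW GLOBAL VARIABLES LIKE 'max_connections';

Once the change is validated, write max_connections = 300 into the [mysqld] section of the configuration file so it survives a restart.
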
Basic settings

Here are 3 settings that you should always look at. If you do not, you are very likely to run into problems very quickly.

innodb_buffer_pool_size: this is the #1 setting to look at for any installation using InnoDB. The buffer pool is where data and indexes are cached: having it as large as possible will ensure you use memory and not disks for most read operations. Typical values are 5-6GB (8GB RAM), 20-25GB (32GB RAM), 100-120GB (128GB RAM).

innodb_log_file_size: this is the size of the redo logs. The redo logs are used to make sure writes are fast and durable and also during crash recovery. Up to MySQL 5.1, it was hard to adjust, as you wanted both large redo logs for good performance and small redo logs for fast crash recovery. Fortunately crash recovery performance has improved a lot since MySQL 5.5 so you can now have good write performance and fast crash recovery. Until MySQL 5.5 the total redo log size was limited to 4GB (the default is to have 2 log files). This has been lifted in MySQL 5.6.

Starting with innodb_log_file_size = 512M (giving 1GB of redo logs) should give you plenty of room for writes. If you know your application is write-intensive and you are using MySQL 5.6, you can start with innodb_log_file_size = 4G.

max_connections: if you are often facing the ‘Too many connections’ error, max_connections is too low. It frequently happens that, because the application does not close its database connections correctly, you need many more than the default 151 connections. The main drawback of high values for max_connections (like 1000 or more) is that the server will become unresponsive if for any reason it has to run 1000 or more active transactions. Using a connection pool at the application level or a thread pool at the MySQL level can help here.

InnoDB settings

InnoDB has been the default storage engine since MySQL 5.5 and it is much more frequently used than any other storage engine. That’s why it should be configured carefully.

innodb_file_per_table: this setting will tell InnoDB if it should store data and indexes in the shared tablespace (innodb_file_per_table = OFF) or in a separate .ibd file for each table (innodb_file_per_table= ON). Having a file per table allows you to reclaim space when dropping, truncating or rebuilding a table. It is also needed for some advanced features such as compression. However it does not provide any performance benefit. The main scenario when you do NOT want file per table is when you have a very high number of tables (say 10k+).

With MySQL 5.6, the default value is ON so you have nothing to do in most cases. For previous versions, you should set it to ON prior to loading data as it has an effect on newly created tables only.

innodb_flush_log_at_trx_commit: the default setting of 1 means that InnoDB is fully ACID compliant. It is the best value when your primary concern is data safety, for instance on a master. However, it can have a significant overhead on systems with slow disks because of the extra fsyncs that are needed to flush each change to the redo logs. Setting it to 2 is a bit less reliable because committed transactions will be flushed to the redo logs only once a second, but that can be acceptable in some situations for a master, and it is definitely a good value for a replica. 0 is even faster, but you are more likely to lose some data in case of a crash: it is only a good value for a replica.

innodb_flush_method: this setting controls how data and logs are flushed to disk. Popular values are O_DIRECT when you have a hardware RAID controller with a battery-protected write-back cache and fdatasync (default value) for most other scenarios. sysbench is a good tool to help you choose between the 2 values.

innodb_log_buffer_size: this is the size of the buffer for transactions that have not been committed yet. The default value (1MB) is usually fine but as soon as you have transactions with large blob/text fields, the buffer can fill up very quickly and trigger extra I/O load. Look at the Innodb_log_waits status variable and if it is not 0, increase innodb_log_buffer_size.
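A quick check; if the value keeps growing between two samples, the log buffer is filling up:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';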

Other settings

query_cache_size: the query cache is a well known bottleneck that can be seen even when concurrency is moderate. The best option is to disable it from day 1 by setting query_cache_size = 0 (now the default on MySQL 5.6) and to use other ways to speed up read queries: good indexing, adding replicas to spread the read load or using an external cache (memcache or redis for instance). If you have already built your application with the query cache enabled and if you have never noticed any problem, the query cache may be beneficial for you. So you should be cautious if you decide to disable it.

log_bin: enabling binary logging is mandatory if you want the server to act as a replication master. If so, don’t forget to also set server_id to a unique value. It is also useful for a single server when you want to be able to do point-in-time recovery: restore your latest backup and apply the binary logs. Once created, binary log files are kept forever. So if you do not want to run out of disk space, you should either purge old files with PURGE BINARY LOGS or set expire_logs_days to specify after how many days the logs will be automatically purged.

Binary logging, however, is not free, so if you do not need it, for instance on a replica that is not also acting as a master, it is recommended to keep it disabled.

skip_name_resolve: when a client connects, the server will perform hostname resolution, and when DNS is slow, establishing the connection will become slow as well. It is therefore recommended to start the server with skip-name-resolve to disable all DNS lookups. The only limitation is that the GRANT statements must then use IP addresses only, so be careful when adding this setting to an existing system.
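To tie these together, here is a minimal sketch of a [mysqld] section using the settings discussed in this post. The values are illustrative (sized per the guidance above for a dedicated server with 32GB of RAM), assumptions to adapt rather than a prescription:

[mysqld]
# InnoDB
innodb_buffer_pool_size        = 20G       # 20-25GB for 32GB RAM, per above
innodb_log_file_size           = 512M      # 1GB of redo logs with 2 log files
innodb_file_per_table          = ON        # the default from MySQL 5.6
innodb_flush_log_at_trx_commit = 1         # full durability; consider 2 on replicas
innodb_flush_method            = O_DIRECT  # only with a battery-protected write-back cache
innodb_log_buffer_size         = 16M       # raise only if Innodb_log_waits grows
# Connections and caches
max_connections                = 300       # size to your application's real needs
query_cache_size               = 0         # disabled, as recommended above
skip_name_resolve                          # GRANTs must then use IP addresses
# Binary logging (masters and point-in-time recovery)
log_bin                        = mysql-bin
server_id                      = 1
expire_logs_days               = 7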

Conclusion

There are of course other settings that can make a difference depending on your workload or your hardware: low memory and fast disks, high concurrency, write-intensive workloads for instance are cases when you will need specific tuning. However the goal here is to allow you to quickly get a sane MySQL configuration without spending too much time on changing non-essential MySQL settings or on reading documentation to understand which settings do matter to you.


Past, present and future of MySQL and variants: linux.conf.au 2014 video

January 22, 2014 - 3:00am

Early this month I spoke at linux.conf.au in Perth, Australia, where I presented “Past, Present and Future of MySQL and variants.”

Here’s my presentation in its entirety. Please let me know if you have any questions or comments. (linux.conf.au is widely regarded by delegates as one of the best community run Linux conferences worldwide.)


Beware of MySQL 5.6 server UUID when cloning slaves

January 21, 2014 - 7:18am

The other day I was working on an issue where one of the slaves was showing unexpected lag. Interestingly, with only the IO thread running, the slave was doing significantly more IO than the rate at which the IO thread was fetching binary log events from the master. A closer inspection showed that this had to do with the error log filling up with error messages apparently suggesting that the slave IO thread was disconnecting and reconnecting repeatedly.

Below are the kinds of error messages that were being generated and were filling up the error log:

2014-01-18 03:33:46 22921 [Note] Slave: received end packet from server, apparent master shutdown:
2014-01-18 03:33:46 22921 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000001' at position 4239
2014-01-18 03:33:46 22921 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2014-01-18 03:33:46 22921 [Note] Slave: received end packet from server, apparent master shutdown:
2014-01-18 03:33:46 22921 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.000001' at position 4239
2014-01-18 03:33:46 22921 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.

These error messages are quite vague and do not indicate the actual problem at all; they appear to point to disconnection and reconnection of the IO thread. These messages were being generated at close to 20MB/s, which was producing unnecessary IO load.

Importance of a Unique MySQL 5.6 Server UUID

But anyway coming back to the problem. Looking at the output of running SHOW SLAVE HOSTS on the master showed what the problem was:

master [localhost] {msandbox} (test) > show slave hosts;
+-----------+----------+-------+-----------+--------------------------------------+
| Server_id | Host     | Port  | Master_id | Slave_UUID                           |
+-----------+----------+-------+-----------+--------------------------------------+
| 101       | SBslave1 | 19681 | 1         | 6c27ed6d-7ee1-11e3-be39-6c626d957cff |
+-----------+----------+-------+-----------+--------------------------------------+
1 row in set (0.00 sec)

master [localhost] {msandbox} (test) > show slave hosts;
+-----------+----------+-------+-----------+--------------------------------------+
| Server_id | Host     | Port  | Master_id | Slave_UUID                           |
+-----------+----------+-------+-----------+--------------------------------------+
| 102       | SBslave2 | 19682 | 1         | 6c27ed6d-7ee1-11e3-be39-6c626d957cff |
+-----------+----------+-------+-----------+--------------------------------------+
1 row in set (0.00 sec)

The slave hosts for the UUID “6c27ed6d-7ee1-11e3-be39-6c626d957cff” were fluctuating between “SBslave1” and “SBslave2”. So the actual problem was that there were two slaves running with the same server UUID, and that was confusing the master. This was caused by the new slave “SBslave2” being cloned from the slave “SBslave1”. The cloning process ended up copying the file “auto.cnf”, which is present in the MySQL data directory. The auto.cnf file stores the server’s UUID, which is used to uniquely identify a server. This is a new feature in MySQL 5.6 and is explained very well in the MySQL manual here.

So essentially when a server starts up, it reads the UUID value from the auto.cnf file. If the auto.cnf file is not present or the UUID value cannot be read, then the server generates a new UUID. When a new UUID is generated you should see a message similar to the following in the MySQL error log:

2014-01-18 03:44:43 27365 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 7c6858fa-7fea-11e3-84fa-6c626d957cff.

Conclusion

So the important lesson is to remember not to reuse the auto.cnf file when cloning a server, as the server UUID is an important identifier of a particular server, used by the master for server identification. I also believe that MySQL should show a different error message that is more pertinent to the actual error; the master could in theory send a different type of event to a slave when it notices one with a duplicate UUID.
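A minimal sketch of the safe cloning procedure (the datadir path is an assumption; adjust it to your installation):

$ /etc/init.d/mysql stop
$ rm /var/lib/mysql/auto.cnf          # force generation of a fresh server UUID
$ /etc/init.d/mysql start
mysql> SELECT @@server_uuid;          -- verify each slave now reports a unique value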


Percona is hiring a Go back-end engineer

January 20, 2014 - 6:24pm

We’re looking for a full-time web back-end developer to join our team and help continue building the Percona Cloud Tools (https://cloud.percona.com) back end. We use the Go language, so we would love to find a Go programmer, but since Go is a very new language, we’re looking for any great programmer who wants to learn and master Go: it’s fun and easy!
We understand that Go is a new technology, so you may not have direct experience with it, but if you are looking to become an expert in it together with our team and to participate in the development of a new generation of tools for MySQL, please submit your application!


Percona Replication Manager (PRM) now supporting 5.6 GTID

January 20, 2014 - 6:05am

Over the last few days, I integrated the MySQL 5.6 GTID work of Frédéric Descamps, a colleague at Percona, into the Percona Replication Manager (PRM). The agent supports the GTID replication mode of MySQL 5.6, and if the master suffers a hard crash, it picks the slave that has applied the highest transaction ID from the dead master. Given the nature of GTID-based replication, all the other slaves then resync appropriately to their new master, which is pretty cool and has yet to be matched by the regular PRM agent.

For now, it is part of a separate agent, mysql_prm56, which may be integrated with the regular agent in the future. To use it, download the agent with the link above; the pacemaker configuration is similar to that of the regular PRM agent. If you start from scratch, have a look here and, of course, replace “mysql_prm” with “mysql_prm56”. Keep in mind that although it has successfully run many tests, this is the first release and there is no field experience yet. I invite you to report any issue or successful usage to PRM-discuss.

As a side note, dealing with GTID-based replication is slightly different from regular replication. I invite you to consult these posts for more details:

Replication in MySQL 5.6: GTIDs benefits and limitations – Part 1

How to create/restore a slave using GTID replication in MySQL 5.6
How to create a new (or repair a broken) GTID based slave with Percona XtraBackup
Repair MySQL 5.6 GTID replication by injecting empty transactions


keepalived with reader and writer VIPs for Percona XtraDB Cluster

January 17, 2014 - 12:00am

This is a followup to Jay Janssen’s October post, “Using keepalived for HA on top of Percona XtraDB Cluster.” We recently got a request from a customer with 2 VIPs (Virtual IP addresses), one for a reader and one for a writer, on a cluster of 3 nodes. They wanted to keep it simple, with low latency, and without requiring an external node resource like HAProxy would.

keepalived is a simple load balancer with HA capabilities, which means it can proxy TCP services behind it and at the same time, keep itself highly available using VRRP as failover mechanism. This post is about taking advantage of the VRRP capabilities built into keepalived to intelligently manage your PXC VIPs.

While Yves Trudeau also wrote a very interesting and somewhat similar solution using ClusterIP and Pacemaker to load balance VIPs, the two have different use cases. Both solutions avoid the latency of an external proxy or load balancer, but unlike ClusterIP, connections to the desired VIP with keepalived go to a single node, which means a little less work for each node trying to decide whether it should respond to the request. ClusterIP is good if you want to send writes to all nodes in a calculated distribution, while with our keepalived option each VIP is at best assigned to a single node; depending on your workload, each will have advantages and disadvantages.

The OS I used was CentOS 6.4 with keepalived 1.2.7, available in the yum repositories. However, it’s difficult to troubleshoot failover behavior with VRRP_Instance weights without seeing them from keepalived directly, so I used a custom build with a patch for the --vrrp-status option that allows me to monitor something like this:

[root@pxc01 keepalived]# keepalived --vrrp-status
VRRP Instance            : writer_vip
 Interface               : eth5
 Virtual Router ID       : 60
 State                   : BACKUP
 Virtual IP address      : 192.168.56.83
 Advertisement interval  : 1 sec
 Preemption              : Enabled, delay 0 secs
 Priority                : 101
 Effective Priority      : 101
 Authentication          : NONE
 Master router           : 192.168.56.44 priority 151
 Master down interval    : 3.6
VRRP Instance            : reader_vip
 Interface               : eth5
 Virtual Router ID       : 61
 State                   : MASTER
 Virtual IP address      : 192.168.56.84
 Advertisement interval  : 1 sec
 Preemption              : Enabled, delay 0 secs
 Priority                : 101
 Effective Priority      : 181
 Authentication          : NONE
 Master router           : 192.168.56.42 (local)

So first, let’s compile keepalived from source; the GitHub branch below is where the status patch is available.

cd ~
git clone https://github.com/jonasj76/keepalived.git
cd keepalived   # enter the clone before checking out the patched revision
git checkout 5c5b2cc51760967c92b968d6e886ab6ecc2ee86d
git branch 5c5b2cc51760967c92b968d6e886ab6ecc2ee86d
./configure
make && make install

Install the custom tracker script below. Because compiling keepalived above installs it under /usr/local/bin, I put this script there as well. One would note that this script is completely redundant, it’s true, but beware that keepalived does not validate its configuration, especially track_scripts, so I prefer to keep it in a separate bash script I can easily debug when it misbehaves. Of course, when all is working well, you can always merge this into the keepalived.conf file.

#!/bin/bash
# Modify these addresses to match your reader and writer VIPs
WRITER_VIP=192.168.56.83
READER_VIP=192.168.56.84
# Make sure your clustercheck script also works
PXC_CHECK='/usr/bin/clustercheck clustercheck password 0'
SCRIPT=$1
WEIGHT=101
case $SCRIPT in
  'bad_pxc')
    $PXC_CHECK || exit 1
    ;;
  'nopreempt_writer')
    [[ "$(hostname|cut -d'.' -f1)" != 'pxc01' && $(ip ad sh|grep $WRITER_VIP) && $(ip ad sh|grep $READER_VIP|grep -c inet) -eq 0 ]] || exit 1
    ;;
  'nopreempt_reader')
    [[ "$(hostname|cut -d'.' -f1)" != 'pxc02' && $(ip ad sh|grep $READER_VIP) && $(ip ad sh|grep $WRITER_VIP|grep -c inet) -eq 0 ]] || exit 1
    ;;
  'repel_writer')
    [ $(ip ad sh|grep $WRITER_VIP|grep -c inet) -eq 0 ] || exit 1
    ;;
esac
exit 0

And below is my /etc/keepalived.conf:

vrrp_script nopreempt_writer_vip {
   script "/usr/local/bin/pxc-track nopreempt_writer"
   interval 2
}
vrrp_script nopreempt_reader_vip {
   script "/usr/local/bin/pxc-track nopreempt_reader"
   interval 2
}
vrrp_script repel_writer_vip {
   script "/usr/local/bin/pxc-track repel_writer"
   interval 2
}
vrrp_script bad_pxc {
   script "/usr/local/bin/pxc-track bad_pxc"
   interval 2
}
vrrp_instance writer_vip {
   interface eth5
   state BACKUP
   virtual_router_id 60
   priority 101
   virtual_ipaddress {
      192.168.56.83
   }
   track_script {
      nopreempt_writer_vip weight 50
      bad_pxc weight -100
   }
   track_interface {
      eth5
   }
   notify_master "/bin/echo 'writer now master' > /tmp/keepalived-w.state"
   notify_backup "/bin/echo 'writer now backup' > /tmp/keepalived-w.state"
   notify_fault "/bin/echo 'writer now fault' > /tmp/keepalived-w.state"
}
vrrp_instance reader_vip {
   interface eth5
   state BACKUP
   virtual_router_id 61
   priority 101
   virtual_ipaddress {
      192.168.56.84
   }
   track_script {
      repel_writer_vip weight 30
      nopreempt_reader_vip weight 50
      bad_pxc weight -100
   }
   track_interface {
      eth5
   }
   ! This does not work properly if we stop the MySQL process
   ! VIP seems to stick on the node so we have separate nopreempt_* track_scripts
   !nopreempt
   notify_master "/bin/echo 'reader now master' > /tmp/keepalived-r.state"
   notify_backup "/bin/echo 'reader now backup' > /tmp/keepalived-r.state"
   notify_fault "/bin/echo 'reader now fault' > /tmp/keepalived-r.state"
}

There are a number of things you can change here like remove or modify the notify_* clauses to fit your needs or send SMTP notifications during VIP failovers. I also prefer the initial state of the VRRP_Instances to be on BACKUP instead of master and let the voting on runtime dictate where the VIPs should go.

The configuration ensures that the reader and writer VIPs will not share a single node when more than one node is available in the cluster. Even though the writer VIP prefers pxc01 in my example, this does not really matter much and only makes a difference when the reader VIP is not in the picture; thanks to the nopreempt_* track_scripts, there is no automatic failback.

Now, to see it in action, after starting the cluster and keepalived in order pxc01, pxc02, pxc03, I have these statuses and weights:

[revin@forge ~]$ for h in pxc01 pxc02 pxc03; do echo "$h $(ssh root@$h 'cat /tmp/keepalived-w.state /tmp/keepalived-r.state'|xargs)"; done
pxc01 writer now master reader now backup
pxc02 writer now backup reader now master
pxc03 writer now backup reader now backup
pxc01 2014-01-15_20_58_23 writer_vip 161 reader_vip 101
pxc02 2014-01-15_20_58_28 writer_vip 101 reader_vip 131
pxc03 2014-01-15_20_58_36 writer_vip 131 reader_vip 131

The writer is on pxc01 and the reader on pxc02. Even though the reader VIP score is tied between pxc02 and pxc03, it remains on pxc02 because of our nopreempt_* script. Let’s see what happens if I stop MySQL on pxc02:

[revin@forge ~]$ for h in pxc01 pxc02 pxc03; do echo "$h $(ssh root@$h 'cat /tmp/keepalived-w.state /tmp/keepalived-r.state'|xargs)"; done
pxc01 writer now master reader now backup
pxc02 writer now backup reader now backup
pxc03 writer now backup reader now master
pxc01 2014-01-15_21_01_17 writer_vip 161 reader_vip 101
pxc02 2014-01-15_21_01_24 writer_vip 31 reader_vip 31
pxc03 2014-01-15_21_01_36 writer_vip 101 reader_vip 181

The reader VIP moved to pxc03 and the weights changed: the pxc02 reader dropped by 100, while on pxc03 it gained 50; again, we set this weight higher to achieve no-preemption. Now let’s stop MySQL on pxc03:

[revin@forge ~]$ for h in pxc01 pxc02 pxc03; do echo "$h $(ssh root@$h 'cat /tmp/keepalived-w.state /tmp/keepalived-r.state'|xargs)"; done
pxc01 writer now master reader now master
pxc02 writer now backup reader now backup
pxc03 writer now backup reader now backup
pxc01 2014-01-15_21_04_43 writer_vip 131 reader_vip 101
pxc02 2014-01-15_21_04_49 writer_vip 31 reader_vip 31
pxc03 2014-01-15_21_04_56 writer_vip 31 reader_vip 31

All our VIPs are now on pxc01; let’s start MySQL on pxc02:

[revin@forge ~]$ for h in pxc01 pxc02 pxc03; do echo "$h $(ssh root@$h 'cat /tmp/keepalived-w.state /tmp/keepalived-r.state'|xargs)"; done
pxc01 writer now master reader now backup
pxc02 writer now backup reader now master
pxc03 writer now backup reader now backup
pxc01 2014-01-15_21_06_41 writer_vip 161 reader_vip 101
pxc02 2014-01-15_21_06_50 writer_vip 101 reader_vip 131
pxc03 2014-01-15_21_06_55 writer_vip 31 reader_vip 31

Our reader is back on pxc02 and the writer remains intact. When both VIPs end up on a single node, i.e. the last node standing, and a second node comes up, the reader VIP moves, not the writer. This prevents the risk of breaking connections that may be writing to the node currently holding the writer VIP.


FOSDEM 2014: MySQL And Friends Community Dinner

January 16, 2014 - 11:04am

FOSDEM is coming to town again here in Brussels this February 1-2, and as usual, there is a MySQL Devroom. Traditionally, a MySQL and Friends community dinner is also held during the event and this year is no different – we’ve organized one for Saturday, February 1 from 8:30 p.m. to 11:30 p.m. (CET).

If you want to sign up for the dinner, go to http://fosdemmysqlcommunitydinner2014.eventbrite.com/

This year’s edition is going to be slightly different…

We have rented a private cafeteria at a company called Mundo-B, where catering will be served by Kamilou.

The whole place is ours – there will be high tables and normal tables, and plenty of time and space to socialize and meet people, even the ones that are sitting 3 chairs away from you!

There’s also no need to wait for your food anymore. There will be a 3-course buffet and drinks are included! We’ve ensured that vegetarians also have a choice. Gone is all the hassle of deciphering bills and calculating how much money you’ve spent.

We’ve requested a typically Belgian menu to be put together for the occasion. Here is what you can expect:

Starters:

- Brussels Sprouts Salad with Mustard
- Chicory and apple salad

Mains:

- Beef stew with Grimbergen beer
- “Waterzooi” with fish, Flemish style
- Vegetarian option (to be announced)
- Fries, salad and bread included

Desserts:

- Cheesecake with “speculoos”
- Chocolate fondant

Available drinks (included):

- Choice of Belgian Beers
- Red Wine / White Wine
- Sparkling Water / Still Water

An event like this would not be possible without a very generous list of sponsors who helped us greatly in bringing the price down for each attendee and allowing us to organize it at the listed ticket price – including our own Percona, along with Oracle and MariaDB.

On that subject, we would like to point out that we will not be able to get an exact fix on the price until we have a full list of attendees, so with that in mind, the price we’ve set is based on an assumed number of attendees. If attendance ends up lower than our target, any remaining funds will be donated to the FOSDEM organization.

The reason we charge money up front is two-fold:

1) We need to be able to give the caterer a rough estimate of the amount of attendees ahead of time. Free signups tend to make that number a bit less reliable. We want to make sure there is enough food to go around for everyone attending!

2) Stress-free financials on the night of the dinner! We all know the effects of Belgian Beer on arithmetic skills. Flash your ticket, get your food.

We’re looking forward to meeting you all again at Fosdem and the Community Dinner. See you then!

Party-Squad - Lizz, Dim0, Kenny

Generously sponsored by:

Wondering how to get there from Fosdem?

The venue itself is located very close to the Royal Palace. You can find the route to get there right here.

The total distance from the ULB campus is about 3.3km, so you could walk there, but it might be more comfortable to take the bus.

- Bus 71 departs from the “ULB” or the “Jeanne” stops regularly, and will take you up to “Matonge”, where you need to get out, and walk the remaining 350m.


Analyzing WordPress MySQL queries with Query Analytics

January 16, 2014 - 5:00am

This blog, MySQLPerformanceBlog.com, is powered by WordPress, but we never really looked into what kind of queries WordPress sends to MySQL. So for a couple of months we ran a Query Analytics (part of Percona Cloud Tools) agent there, and now it is interesting to take a look at the queries. Query Analytics uses reports produced by pt-query-digest, but it is quite different in that it lets you see trends and the dynamics of a particular query, in contrast to pt-query-digest, which is just a single static report.

Why is looking into queries important? I gave an intro in my previous post in this series.

So Query Analytics gives a report on the top queries. How do we detect which query is “bad”?
One of the metrics I typically look at is the ratio of “Rows examined” to “Rows sent.” In an OLTP workload
I expect “Rows sent” to be close to “Rows examined,” because otherwise the query handles a lot of rows (“examined”) that are not used in the final result set (“sent”), which means wasted CPU cycles and even unnecessary IO if the rows are not in memory.

Looking at the WordPress queries, it does not take long to find one:

This one actually looks quite bad… It examines up to ~186,000 rows to return 0 or, in the best case, 1 row.
The full query text is (and it is available in Query Analytics – you do not need to dig through logs to find it):

SELECT comment_ID FROM wp_comments WHERE comment_post_ID = '154' AND comment_parent = '0' AND comment_approved != 'trash' AND ( comment_author = 'poloralphlauren.redhillrecords' OR comment_author_email = 'spam@gmail.com' ) AND comment_content = 'Probabilities are in case you are like the ma spam jorityof people nowadays, you\'re f lululemonaddictoutletcanadaSale.angelasauceda ighting tooth and nail just looking to keep up together with your existence. Acquiring organized can help you win the fight. Appear to the ugg factors in just abo spam ut every of your spaces (desk, workplace, living room, bed' LIMIT 1;

We can see how the execution time of this query changes over time

and also how many rows it examined over the last month

It is clearly an upward trend, and obviously the query does not scale well as there is more and more data.
I find these trending graphs very useful, and they are available in Query Analytics as we continuously digest and analyze queries. We can see that over the last month alone the number of rows this query examines increased from ~130K to ~180K.

So, the obvious question is how to optimize this query?

We look at the EXPLAIN plan:

+----+-------------+-------------+------+-----------------------------------------------------------+-----------------+---------+-------+--------+-------------+
| id | select_type | table       | type | possible_keys                                             | key             | key_len | ref   | rows   | Extra       |
+----+-------------+-------------+------+-----------------------------------------------------------+-----------------+---------+-------+--------+-------------+
|  1 | SIMPLE      | wp_comments | ref  | comment_post_ID,comment_approved_date_gmt,comment_parent  | comment_post_ID | 8       | const | 188482 | Using where |
+----+-------------+-------------+------+-----------------------------------------------------------+-----------------+---------+-------+--------+-------------+

and SHOW CREATE TABLE

CREATE TABLE `wp_comments` (
  `comment_ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `comment_post_ID` bigint(20) unsigned NOT NULL DEFAULT '0',
  `comment_author` tinytext NOT NULL,
  `comment_author_email` varchar(100) NOT NULL DEFAULT '',
  `comment_author_url` varchar(200) NOT NULL DEFAULT '',
  `comment_author_IP` varchar(100) NOT NULL DEFAULT '',
  `comment_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_content` text NOT NULL,
  `comment_karma` int(11) NOT NULL DEFAULT '0',
  `comment_approved` varchar(20) NOT NULL DEFAULT '1',
  `comment_agent` varchar(255) NOT NULL DEFAULT '',
  `comment_type` varchar(20) NOT NULL DEFAULT '',
  `comment_parent` bigint(20) unsigned NOT NULL DEFAULT '0',
  `user_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`comment_ID`),
  KEY `comment_post_ID` (`comment_post_ID`),
  KEY `comment_approved_date_gmt` (`comment_approved`,`comment_date_gmt`),
  KEY `comment_date_gmt` (`comment_date_gmt`),
  KEY `comment_parent` (`comment_parent`)
)

Obviously WordPress did not design this schema to handle 180,000 comments on a single post.
There are several ways to fix it; I will take the easiest one and change the key
KEY comment_post_ID (comment_post_ID)
to
KEY comment_post_ID (comment_post_ID,comment_content(300))
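Expressed as DDL, that change might look like the following (a sketch – the index name is kept the same, and on a table this size the rebuild will take a while):

ALTER TABLE wp_comments
  DROP KEY comment_post_ID,
  ADD KEY comment_post_ID (comment_post_ID, comment_content(300));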

This changes the execution plan to:

+----+-------------+-------------+------+-----------------------------------------------------------+--------------+---------+-------------+------+-------------+
| id | select_type | table       | type | possible_keys                                             | key          | key_len | ref         | rows | Extra       |
+----+-------------+-------------+------+-----------------------------------------------------------+--------------+---------+-------------+------+-------------+
|  1 | SIMPLE      | wp_comments | ref  | comment_post_ID,comment_approved_date_gmt,comment_parent  | comment_post | 910     | const,const | 1    | Using where |
+----+-------------+-------------+------+-----------------------------------------------------------+--------------+---------+-------------+------+-------------+

From 186,000 rows to 910 rows – that’s quite an improvement!

How does it affect execution time? Let’s let the query run for a while and check our trending graph again:

Execution time dropped from ~600ms to ~34ms

and for Rows examined:

The 2nd query is also not too hard to find, and it is again on the wp_comments table

The query examines up to 16K rows, sending only 123 rows in the best case.

The query text is (this one is from a different instance of WordPress, so the table structure is different)

SELECT comment_post_ID FROM wp_comments WHERE LCASE(comment_author_email) = 'spam@gmail.com' AND comment_subscribe='Y' AND comment_approved = '1' GROUP BY comment_post_ID

and EXPLAIN for this particular one

+----+-------------+-------------+------+---------------------------+---------------------------+---------+-------+------+----------------------------------------------+
| id | select_type | table       | type | possible_keys             | key                       | key_len | ref   | rows | Extra                                        |
+----+-------------+-------------+------+---------------------------+---------------------------+---------+-------+------+----------------------------------------------+
|  1 | SIMPLE      | wp_comments | ref  | comment_approved_date_gmt | comment_approved_date_gmt | 62      | const | 6411 | Using where; Using temporary; Using filesort |
+----+-------------+-------------+------+---------------------------+---------------------------+---------+-------+------+----------------------------------------------+

This table structure is

CREATE TABLE `wp_comments` (
  `comment_ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `comment_post_ID` bigint(20) unsigned NOT NULL DEFAULT '0',
  `comment_author` tinytext NOT NULL,
  `comment_author_email` varchar(100) NOT NULL DEFAULT '',
  `comment_author_url` varchar(200) NOT NULL DEFAULT '',
  `comment_author_IP` varchar(100) NOT NULL DEFAULT '',
  `comment_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_content` text NOT NULL,
  `comment_karma` int(11) NOT NULL DEFAULT '0',
  `comment_approved` varchar(20) NOT NULL DEFAULT '1',
  `comment_agent` varchar(255) NOT NULL DEFAULT '',
  `comment_type` varchar(20) NOT NULL DEFAULT '',
  `comment_parent` bigint(20) unsigned NOT NULL DEFAULT '0',
  `user_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `comment_reply_ID` int(11) NOT NULL DEFAULT '0',
  `comment_subscribe` enum('Y','N') NOT NULL DEFAULT 'N',
  `openid` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`comment_ID`),
  KEY `comment_post_ID` (`comment_post_ID`),
  KEY `comment_date_gmt` (`comment_date_gmt`),
  KEY `comment_approved_date_gmt` (`comment_approved`,`comment_date_gmt`),
  KEY `comment_parent` (`comment_parent`)
)

There are again several ways to make the query more optimal, but let’s make it a little exercise: please
propose your solution in the comments, and for the one I like the most by the end of February, I will give my personal discount code to the Percona Live MySQL Conference and Expo 2014

So, in conclusion:

  • Query Analytics from Percona Cloud Tools gives immediate insight as to which query requires attention
  • With continuously running reports we can see the trends and effects from our optimizations

Please also note that Query Analytics does not require you to install a MySQL proxy, third-party middleware or any kind of TCP traffic analyzer. It operates entirely on the slow log generated by MySQL, Percona Server or MariaDB (Percona Server provides much more information in the slow log than vanilla MySQL).
So try Percona Cloud Tools for free while it’s still in beta. You’ll be up and running in minutes!
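If you do not have a slow log to feed it yet, a minimal starting point might look like this (a sketch: long_query_time=0 logs every query and can be heavy on busy servers, and log_slow_verbosity is Percona Server-specific):

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;         -- log every query; raise this on busy servers
SET GLOBAL log_slow_verbosity = 'full'; -- Percona Server only: extended per-query metrics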

Actually, would you like me to take a look at your queries and provide my advice? It is possible.
I will be running a webinar titled “Analyze MySQL Query Performance with Percona Cloud Tools” on Feb. 12, 2014; please register and see the conditions.

The post Analyzing WordPress MySQL queries with Query Analytics appeared first on MySQL Performance Blog.

Percona Cloud Tools January 2014 status update

January 16, 2014 - 12:00am

Percona Cloud Tools, a new service currently in beta, applies Percona’s MySQL expertise to identify the best opportunities for application performance improvement.

The first component of Percona Cloud Tools that’s now available is Query Analytics. Based on pt-query-digest, Query Analytics enables DBAs and application developers to analyze MySQL queries over long periods of time and shines a spotlight on problems. One could say it’s “continuous slow log analysis through the web… as-a-Service” but that would be quite a long acronym. Percona Cloud Tools can help you:

  • Ensure queries perform well and with expected response times
  • Find and analyze the slowest, most problematic queries
  • Measure gains in query performance after implementing changes
  • Get low level metrics details and other data for each query

Join the other testers that are improving the performance of their applications and sign up for the beta today. Of course, it’s free.
Even with the December holidays the Percona Cloud Tools team was hard at work. Here are some of the ways we’ve recently improved the service.

New Features
Evolving over many months, we are now introducing a refined Query Analytics user experience to efficiently analyze MySQL query metrics. It is made up of three primary views.

  • Performance – The Performance view delivers at-a-glance metrics about MySQL performance. These include Queries Per Second (QPS), load, 95% response time, and others. Like the dashboard of a car, this data is vital but minimal. It’s useful when everything should be ok and you’re just taking a quick look to make sure that nothing has hit a wall.
  • Historical – The Historical view looks at the bigger picture and is one of the ways that Percona Cloud Tools is quite unique. It’s the history of metrics data and is very useful for trending, spotting problems, and seeing if changes in the past helped or hurt query performance. For example, not only can you view the most recent Query Response Time values but you can also view values over the past weeks.
  • Report – The Report view delivers “the facts” about actual queries and is easily accessible from the Performance and Historical views. Just move your mouse over a server name anywhere in the application to see a query report. For example, if your Performance view shows that MySQL is having problems, getting to the Report view to see the bad queries is simple and fast.

So not only are we making users’ lives easier through better data organization, the new Percona Cloud Tools user experience enables you to quickly navigate to the specifics of any problem.

Bugs Fixed
Not only have we rolled out a new user experience, we’ve also addressed some bugs along the way.

  • An existing agent could fail to start, receiving a 500 error from the API.
  • Max values were shown for aggregated “QPS” and “Total” periods instead of the average for “QPS” and the sum for “Total”.
  • pt-agent would not apply new MySQL connection options without restarting.

Thank you to all of our beta testers for using Percona Cloud Tools and providing feedback. Your suggestions and bug reports have been very helpful.

Try Percona Cloud Tools for free while it’s still in beta. You’ll be up and running in minutes! Having trouble setting up the service? We’re here to help. Follow the “Feedback” link after you sign in and tell us about the problem.

Finally, on February 12th, Vadim Tkachenko (CTO of Percona) will show you how to Analyze MySQL Query Performance with Percona Cloud Tools. As a bonus, Percona will raffle five (5) 60-minute MySQL query consulting sessions with Vadim to analyze your Percona Cloud Tools Query Analytics data and provide feedback and performance suggestions.  Register now to attend this free webinar!

The post Percona Cloud Tools January 2014 status update appeared first on MySQL Performance Blog.

Percona Live MySQL Conference Sessions Announced

January 15, 2014 - 5:00am

I’m very pleased to announce that the breakout session and tutorial schedules for the third annual Percona Live MySQL Conference and Expo 2014 are now online. The conference will be at The Hyatt Regency Santa Clara and Santa Clara Convention Center, April 1-4, 2014. I wish to personally thank our dedicated Conference Committee members who put in many hours evaluating hundreds of submissions in order to create these schedules.

Breakout Session Schedule

We have a strong and diverse schedule of breakout sessions which will take place April 2-4, following each morning’s keynote addresses. The speaker list includes top MySQL practitioners from some of the world’s leading MySQL vendors and users including Oracle, MariaDB, Facebook, Twitter, Google, LinkedIn, Box, Groupon, and Yahoo. Topics include the following and much, much more:

  • “MySQL 5.7: Core Server Improvements,” Morgan Tocker and Rune Humborstad, Oracle
  • “MySQL 5.6 at Facebook, 2014 Edition,” Yoshinori Matsunobu, Facebook
  • “MySQL 5.7: InnoDB – What’s New,” Sunny Bains, Oracle
  • “Scaling Twitter with MySQL,” Calvin Sun, Twitter
  • “Easy MySQL Replication Setup and Troubleshooting,” Bob Burgess, Salesforce Marketing Cloud
  • “Sharding and Scale-out using MySQL Fabric,” Mats Kindahl, Oracle
  • “Privacy and Security for MySQL at Google in the Snowden Age,” Ian Gulliver, Google
  • “MySQL 5.7: Performance Schema Improvements,” Mark Leith, Oracle
  • “Small Data and MySQL,” Domas Mituzas, Facebook
  • “Hadoop for MySQL People,” Chris Schneider, Groupon.com
  • “InnoDB: A Journey to the Core II,” Davi Arnaut, LinkedIn, and Jeremy Cole, Google
  • “High Availability Tools for use with MySQL,” Jay Janssen, Percona
  • “MySQL 5.7 Replication: A Close Look at What Is New,” Luis Soares, Oracle
  • “Performance Monitoring and Tuning MySQL at Yahoo,” Ashwin Nellore, Xiang Rao, Yahoo
  • “The MySQL NoSQL Wars: Can’t we all just get along?” Tamar Bercovici, Box
  • “MariaDB 10.0: What’s New,” Sergei Golubchik, SkySQL
  • “Virtually Available MySQL, or How to Stop Worrying and Love the Cloud,” Robert Hodges, Continuent
  • “MySQL 5.7: Performance & Scalability Benchmarks,” Dimitri Kravtchuk, Oracle
  • “Asynchronous MySQL: How Facebook Queries Databases,” Chip Turner, Facebook
  • “Architecture and Design of MySQL Powered Applications,” Peter Zaitsev, Percona
  • “MySQL Workload Replay,” Ashwin Nellore, Xiang Rao, Yahoo

Tutorial Schedule

The Percona Live MySQL Conference Committee has also selected a strong agenda of tutorials which will take place on April 1. Tutorials provide practical, in-depth knowledge of critical MySQL issues. Topics include:

  • “MySQL Replication: Advanced Features in all flavors,” Giuseppe Maxia, Continuent
  • “Hands On Trove: Database as a Service in OpenStack (for MySQL),” Florian Haas, hastexo
  • “MariaDB 10 – The complete tutorial,” Ivan Zoratti, Colin Charles, SkySQL Ab, MariaDB
  • “MySQL Break/Fix Lab,” René Cannaò, PalominoDB
  • “InnoDB Architecture and Performance Optimization,” Peter Zaitsev, Percona
  • “Percona XtraDB Cluster / Galera in Practice – 2014 edition,” Jay Janssen, Percona
  • “Deep diving into MySQL monitoring setup,” Roman Vynar, Akshay Suryawanshi, Michael Rikmas, Percona
  • “Introduction to MongoDB for MySQL Users,” Stephane Combaudon, Percona
  • “From Relational to Hadoop – Migrating your data pipeline,” Gwen Shapira, Zburivsky Danil, Cloudera, Pythian
  • “Using TokuDB: A Guided Walk Through a TokuDB Implementation,” Tim Callaghan, Tokutek

Early Bird Registration

Register now to take advantage of Early Bird Registration rates which can save you $200 on the cost of attending. Early Bird Registration for the Percona Live MySQL Conference 2014 ends on February 2.

Birds of a Feather Sessions

I once again expect a great lineup of Birds of a Feather sessions (BOFs) so attendees with interests in the same project or topic can enjoy some quality face time. We are currently accepting Birds of a Feather session proposals which can be organized for individual projects or broader topics (e.g., best practices, open data, standards). The deadline for submissions is January 31, 2014 and any Percona Live MySQL Conference attendee or conference speaker can propose and moderate a session.

Lightning Talks

The sometimes insightful, sometimes funny, always entertaining Lightning Talks are an opportunity for presenters to propose, explain, exhort, or rant on any MySQL-related topic for five minutes. Topics might include a new idea, successful project, cautionary story, quick tip, or demonstration. All submissions will be reviewed and the top 10 will be selected to present during the one-hour Lightning Talks session during the Wednesday night Community Networking Reception. We especially welcome lighthearted, fun or otherwise entertaining submissions. The deadline for submitting a Lightning Talk topic is January 31, 2014.

Save Money by Registering Your Room at the Hyatt Now

The Hyatt Regency Santa Clara is the best place to stay and enjoy all the conference activities including receptions, Birds of a Feather sessions, and Lightning talks. The Hyatt bar area is often packed with conference goers throughout the afternoon and long into the evening. Make your reservation now through the Percona Live MySQL Conference and Expo 2014 website to receive the special rate before our conference room block fills up and rates increase.

Sponsors

We have a great list of sponsors who have already signed up for the Percona Live MySQL Conference and Expo. The list includes many of the world’s top MySQL solution providers, including:

Diamond Plus Sponsors

  • Continuent
  • Fusion-io

Gold Sponsors

  • Pythian
  • Micron
  • SkySQL

Silver Sponsors

  • Yelp
  • AppDynamics
  • Parelastic
  • Box
  • Galera/Codership
  • Google
  • InfiniDB (Calpont)

I Hope to See You in Santa Clara

Visit the Percona Live MySQL Conference and Expo 2014 website for more information about the conference. If you are interested in upcoming announcements, you can also register to receive email updates about the conference. Keep an eye on the conference website for upcoming information on our strong group of keynote sponsors and for the lineup of BOFs and Lightning Talks. 2014 promises to once again be a great year at the Percona Live MySQL Conference and Expo!

The post Percona Live MySQL Conference Sessions Announced appeared first on MySQL Performance Blog.

Upcoming Webinar: What’s new in Percona XtraDB Cluster 5.6

January 15, 2014 - 12:00am

I’ve been blogging a lot about some of the new things you can expect with Percona XtraDB Cluster 5.6 and Galera 3.x – and GA is coming soon.

To get prepared, I’ll be giving a webinar on February 5th at 1PM EST/10AM PST to talk about some of the big new features, upgrading strategies, as well as answering your questions. Alexey Yurchenko, solutions architect from Codership, will be joining me for some extra brain power.

Topics to be covered include:

  • Galera 3 replication enhancements
  • WAN segments
  • Cluster integration with 5.6 Async replication GTIDs
  • Async to cluster inbound replication enhancements
  • Minimal replication images
  • Detecting Donor IST viability
  • Upgrade paths to Percona XtraDB Cluster 5.6

You can register for free right here!

The post Upcoming Webinar: What’s new in Percona XtraDB Cluster 5.6 appeared first on MySQL Performance Blog.

InnoDB file formats: Here is one pitfall to avoid

January 14, 2014 - 6:00am

Compressed tables are an example of an InnoDB feature that became available with the “Barracuda” file format, released with InnoDB 1.1. They can bring significant gains in raw performance and scalability: since the data is stored in a compressed format, the amount of memory and disk space necessary to hold it and move it around (disk/memory) is lower, which makes them attractive for servers equipped with SSD drives of smaller capacity.

Note that the notion of “file formats” (defined by the variable innodb_file_format) was introduced in MySQL 5.5: the evolution of InnoDB has led to the development of new features, some of which required the support of new on-disk data structures that were not available before. That means those particular features (like compressed tables) will only work with the new file format. To make things clear, and to help manage compatibility issues when upgrading and (especially) downgrading MySQL, the original file format started being referred to as “Antelope.”

The default file format in MySQL 5.6 and the latest 5.5 releases is Antelope. Note this can be a bit confusing, as the first releases of 5.5 (until 5.5.7) made the new file format the default – a decision that was later reversed to ensure maximum compatibility in replication configurations comprised of servers running different versions of MySQL. To be sure about which file format is set as the default in your server, you can issue:

mysql> SHOW VARIABLES LIKE 'innodb_file_format';
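On a stock recent installation the result will typically look like this (output sketched, assuming the Antelope default described above):

+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| innodb_file_format | Antelope |
+--------------------+----------+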

The important lesson here, and what motivated me to write this post, is that the file format can only be defined for tablespaces – not for tables in general. This is documented in the manual but maybe not entirely clear:

innodb_file_format: The file format to use for new InnoDB tables. Currently, Antelope and Barracuda are supported. This applies only for tables that have their own tablespace, so for it to have an effect, innodb_file_per_table must be enabled. The Barracuda file format is required for certain InnoDB features such as table compression.

Even if you configure your server with innodb_file_format=Barracuda and recreate the datadir and basic tables with the mysql_install_db script, the shared (system) tablespace will always use Antelope. So, to create tables in the new file format it is imperative that you use innodb_file_per_table. That’s OK and clearly documented, but what might be misleading here is the fact that no warning is issued if you set the file format to Barracuda and create a new compressed table without having innodb_file_per_table set. Not even in the error log. Here’s an example:

mysql> SET GLOBAL innodb_file_format=Barracuda;
Query OK, 0 rows affected (0.00 sec)

mysql> create table test.testA (id int) row_format=Compressed;
Query OK, 0 rows affected (2.71 sec)
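The statement completes silently; checking for warnings right afterwards gives you something like this (sketched):

mysql> SHOW WARNINGS;
Empty set (0.00 sec)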

Now, let’s take a look at what the INFORMATION_SCHEMA tells us about this table:

mysql> SELECT * FROM information_schema.tables WHERE table_schema='test' and table_name='testA'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: test
TABLE_NAME: testA
TABLE_TYPE: BASE TABLE
ENGINE: InnoDB
VERSION: 10
ROW_FORMAT: Compact
TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
DATA_LENGTH: 16384
MAX_DATA_LENGTH: 0
INDEX_LENGTH: 0
DATA_FREE: 0
AUTO_INCREMENT: NULL
CREATE_TIME: 2014-01-07 14:21:05
UPDATE_TIME: NULL
CHECK_TIME: NULL
TABLE_COLLATION: latin1_swedish_ci
CHECKSUM: NULL
CREATE_OPTIONS: row_format=COMPRESSED
TABLE_COMMENT:
1 row in set (0.00 sec)

There are two at-first-glance “contradictory” fields here:

  • “ROW_FORMAT” says the table is using the Compact format, while
  • “CREATE_OPTIONS” indicates “row_format=COMPRESSED” was used when creating the table

The one to trust is ROW_FORMAT: CREATE_OPTIONS stores the options that were used at the moment the table was created, and is used by the SHOW CREATE TABLE statement to “reconstruct” it:

mysql> show create table test.testA;
*************************** 1. row ***************************
Table: testA
Create Table: CREATE TABLE `testA` (
`id` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED
1 row in set (0.00 sec)

Conclusion

A customer contacted us asking how they could get a list of the tables using the compressed format, which we can obtain by interrogating the INFORMATION_SCHEMA:

mysql> SELECT TABLE_NAME, ROW_FORMAT FROM INFORMATION_SCHEMA.TABLES WHERE ROW_FORMAT='Compressed';

To their surprise this returned an empty set. We verified that the tables they created specified ROW_FORMAT=Compressed, but as shown in this article that method is not to be trusted – “ask” the INFORMATION_SCHEMA instead.
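For completeness, here is a sketch of the combination that does produce a genuinely compressed table (testB is a hypothetical table name; both settings must be in place before the CREATE):

mysql> SET GLOBAL innodb_file_per_table=ON;
mysql> SET GLOBAL innodb_file_format=Barracuda;
mysql> CREATE TABLE test.testB (id int) ROW_FORMAT=COMPRESSED;
mysql> SELECT TABLE_NAME, ROW_FORMAT FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='testB';
+------------+------------+
| TABLE_NAME | ROW_FORMAT |
+------------+------------+
| testB      | Compressed |
+------------+------------+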


The post InnoDB file formats: Here is one pitfall to avoid appeared first on MySQL Performance Blog.

Webinars: Common (but deadly) MySQL mistakes

January 13, 2014 - 12:00am

On January 22 I’ll be presenting the first of a series of two new webinars focusing on avoiding common MySQL mistakes:

How to Avoid Common (but Deadly) MySQL Operations Mistakes.

“Don’t bother me with DBA ‘best practices,’ we have real work to get done.”

So go the famous last words from overworked IT managers (we’ve all been there at one time or another).

Best practices don’t have to interrupt your work or make it harder. Ideally, they should make database operations more predictable and less fraught with emergencies or surprises. This presentation provides a short list of common failures of DBAs, and makes the case that you can save yourself time (and grey hair) by adopting better habits.

Some of the tips include:

  • How to know when and why your server changed configuration values.
  • How to have bulletproof backups.
  • How to save up to 20% of your database size at the same time as reducing the query optimizer’s work.

Then on March 5 I’ll present the second webinar in this series:

How to Avoid Common (but Deadly) MySQL Development Mistakes

“We need a database that ‘just works’ and runs at ‘web scale.’”

MySQL software developers are under pressure to do more in less time, and create applications that adapt to ever-changing requirements.

But it’s true that some of a developer’s time is wasted when their method of optimizing involves trying every combination of code and index definition. There must be a more direct path to the best database code. This talk shows you some of these methods, including:

  • How to find out which indexes are the best fit for your MySQL application
  • How to protect yourself from the number one MySQL database security vulnerability on the web
  • How to decide when to optimize a MySQL database application with denormalization, indexes, caching, partitioning, sharding

At the end of this webinar, you’ll be more productive and confident as you develop MySQL database-driven applications.

Please join me!  Register for the first webinar or register for the second webinar!  Or register for both and get two for the price of one (just kidding; they’re both free)!

The post Webinars: Common (but deadly) MySQL mistakes appeared first on MySQL Performance Blog.
