
DevOps Webinar – Follow-up Q&A

November 6, 2012 | Posted In: MySQL


First, I want to thank all the attendees for the nice comments I received.
As promised during the webinar, here are the answers to the questions you asked.

Q: Does Percona provide plugin for cacti?

A: Yes, we do. They are part of the Percona Monitoring Plugins. You can see some examples here.

Q: What if replication is lagging in production?

A: My point, when I said that you don’t want to receive alerts if replication is lagging on a slave used for backups, for example, was that with alerting solutions you need to filter the alerts. You need to reduce them to the minimum set of alerts that are critical for your production. Having too many alerts enabled reduces the attention your operations team can pay to them. This is a problem similar to the story of the boy who cried wolf.
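As an illustration of this kind of filtering (a hypothetical sketch, not Percona’s tooling; the role names and threshold are assumptions), an alerting check can take the slave’s role into account before paging anyone:

```python
# Hypothetical sketch: only page for replication lag on slaves that
# actually serve production traffic; backup/reporting slaves may lag.

PROD_ROLES = {"production", "read-pool"}   # assumed role names


def should_alert(role, lag_seconds, threshold=60):
    """Return True only when a production-facing slave lags past the threshold."""
    if role not in PROD_ROLES:
        return False          # a backup slave is allowed to lag
    return lag_seconds > threshold


# A slave dedicated to backups never pages, however far behind it is:
assert should_alert("backup", 3600) is False
# A production read slave pages once it crosses the threshold:
assert should_alert("production", 120) is True
```

The filtering itself can live in the monitoring system’s configuration; the point is that the decision is made per role, not globally.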

Q: Did you hear about Liquibase, for syncing application code version and database version? Any opinion?

A: First, I want to apologize: I didn’t recognize the name when Terry asked the question during the call, because I usually use the French pronunciation. To answer your question: Liquibase is probably the best and most mature solution for database change management. There are many other alternatives, such as Flyway, c5-db-migration, dbdeploy, MyBatis, AutoPatch… but none that I really like. Most of them were created for Java projects and use XML definitions of your changes, which I don’t really like. For usability, though, I like how MyBatis works.

Q: When testing performance automatically, do you use real production data or generated test data?

A: When testing features I use generated data (with sysbench, for example), but when I need to validate changes for production (continuous integration), real production data is mandatory. All tests should be performed on a copy of the production database. The values of the records are important, but the number of records and the size of the tables must also be equivalent: a full table scan can be very fast on a table of 1M but becomes problematic when the table is 1T.
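For the generated-data case, here is a minimal sketch (using SQLite as a stand-in for MySQL; the table and function names are invented for illustration) of padding a test table until its row count matches production, so that scans cost roughly the same:

```python
import sqlite3

# Sketch: make a test table's row count match the production table's,
# so a full table scan in testing behaves like one in production.


def pad_to_production_size(conn, table, prod_rows):
    """Insert filler rows until `table` holds `prod_rows` rows."""
    current = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    conn.executemany(
        f"INSERT INTO {table} (payload) VALUES (?)",
        (("filler-%d" % i,) for i in range(prod_rows - current)),
    )
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO t (payload) VALUES ('real-row')")
pad_to_production_size(conn, "t", 1000)
assert conn.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1000
```

This only matches the row count; as noted above, for real validation you still want a full copy of production, since value distributions matter too.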

Q: for version control of schemas, do you recommend full schema definition, full definitions with change deltas, or deltas?

A: That depends on the solution used. If you just run the ALTER statements from your configuration management when a file containing those ALTERs is modified, then deltas will be easier.
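The delta approach can be sketched as follows (a hypothetical illustration, with SQLite standing in for MySQL and the delta list and table names invented): each versioned delta is applied at most once, and a tracking table makes re-runs from configuration management idempotent.

```python
import sqlite3

# Sketch of delta-based schema changes: apply each versioned DDL delta
# once, recording applied versions so the tool can be re-run safely.

DELTAS = {  # version -> DDL (these would be files under version control)
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}


def apply_deltas(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT v FROM schema_version")}
    for v in sorted(DELTAS):
        if v not in applied:
            conn.execute(DELTAS[v])
            conn.execute("INSERT INTO schema_version (v) VALUES (?)", (v,))
    conn.commit()


conn = sqlite3.connect(":memory:")
apply_deltas(conn)
apply_deltas(conn)  # second run is a no-op
cols = [c[1] for c in conn.execute("PRAGMA table_info(users)")]
assert cols == ["id", "name", "email"]
```

With full schema definitions instead, the tool would have to diff the desired schema against the live one, which is harder to drive from configuration management.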

Q: When doing a migration that removes a column, moving the database first would make the old code break. How would you overcome this?

A: When you plan to remove a column, the schema linked to the next application version won’t have the column deleted yet; the version after that will remove it. This table makes it easier to understand:

                 1.0                   1.1              1.2
Application      uses col_a & col_b    uses col_a       no changes for that table
Table schema     col_a & col_b         col_a & col_b    col_a

So even if in 1.1 the application no longer uses col_b, we wait for the next release (1.2) to actually delete the column from the table.
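The two-step removal above can be sketched like this (SQLite standing in for MySQL; note that `DROP COLUMN` needs SQLite 3.35+, and the table name is invented for illustration):

```python
import sqlite3

# Sketch of the two-step column removal: the application stops reading
# col_b one release before the schema actually drops it, so the old and
# new code both keep working against either schema.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col_a TEXT, col_b TEXT)")   # schema at 1.0 and 1.1
conn.execute("INSERT INTO t VALUES ('a', 'b')")

# Release 1.1: the application only touches col_a; the schema is unchanged.
assert conn.execute("SELECT col_a FROM t").fetchone() == ("a",)

# Release 1.2: the column is finally removed; the 1.1 queries still run.
conn.execute("ALTER TABLE t DROP COLUMN col_b")
assert conn.execute("SELECT col_a FROM t").fetchone() == ("a",)
```

The same staging works in reverse for adding a column: deploy the schema change first, then the application version that uses it.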

Frederic Descamps

Frédéric joined Percona in June 2011. He is an experienced open source consultant with expertise in infrastructure projects as well as development and database administration. Frédéric is a believer in the DevOps culture.



