
MySQL 5.7 sysbench OLTP read-only results: is MySQL 5.7 really faster?

Latest MySQL Performance Blog posts - April 7, 2016 - 7:26am

This blog will look at MySQL 5.7 sysbench OLTP read-only results to determine if they are faster than previous versions.

As promised in my previous post, I have checked MySQL 5.7 performance against previous versions in a different workload. This time, I will use sysbench OLTP read-only transactions (read-write transactions are part of future research, as there is more tuning required to get the best performance in write workloads).

One important thing to mention is that MySQL 5.6 and 5.7 have special optimizations for READ-ONLY transactions. In MySQL 5.6, however, you need to start a transaction with "START TRANSACTION READ ONLY" to get the optimization benefit. MySQL 5.7 automatically detects read-only transactions.

I’ve modified the sysbench oltp.lua script to use "START TRANSACTION READ ONLY" for MySQL 5.6. This optimization is not available in MySQL 5.5.
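
In SQL terms, the transaction wrapper in the modified script looks like this for MySQL 5.6 (a sketch of the effect, not the exact patch; the table and values are illustrative):

mysql> START TRANSACTION READ ONLY;
mysql> SELECT c FROM sbtest1 WHERE id = 42;
mysql> COMMIT;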

I also tried two different setups:

  • Local connections: the client (sysbench) and the server (mysqld) are running on the same server
  • Network connection: the client and server are connected over a 10Gb network

Other details

  • CPU: server with 56 logical CPU threads, Intel® Xeon® CPU E5-2683 v3 @ 2.00GHz
  • sysbench 10 tables x 10 million rows, Pareto distribution
  • OS: Ubuntu 15.10 (Wily Werewolf)
  • Kernel 4.2.0-30-generic

More details, with scripts and config files, are available on our GitHub.
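
For reference, an invocation along these lines matches the setup above (illustrative only; the exact commands are in the repository, and the flags follow sysbench 0.5 conventions):

sysbench --test=tests/db/oltp.lua \
  --oltp-tables-count=10 --oltp-table-size=10000000 \
  --rand-type=pareto --oltp-read-only=on \
  --num-threads=64 --max-time=300 --max-requests=0 \
  --mysql-host=127.0.0.1 --mysql-user=sbtest run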

Summary results can also be found here:

This post covers the most interesting highlights. First, the results on the local connections:

Looking at these results, I was as surprised as you probably are: at a high number of threads, MySQL 5.7 is actually slower than MySQL 5.6, and by a clearly visible margin.

Let me show you the relative performance of MySQL 5.5 and MySQL 5.6 (having MySQL 5.7 as a baseline = 1.0):

At a lower number of threads, MySQL 5.5 outperforms MySQL 5.7 by 8-15%, and at a higher number of threads MySQL 5.6 is better by 6-7%.

To validate these findings, we can check the results on a remote connection. Here is a chart:

This gives us a similar picture, with a couple of key differences. MySQL 5.6 encounters scalability problems sooner, and the throughput declines. The fix for that is using innodb-thread-concurrency=64.
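
In my.cnf terms, that is:

[mysqld]
innodb_thread_concurrency = 64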

Here are the results:

In this round, I did not test scenarios over 1000 threads. But judging from the results above, it seems that MySQL 5.7 has problems. It will be interesting to see how this affects replication performance; I will test that after my read-write benchmarks.

 

Backup restoration in a different directory

Latest Forum Posts - April 7, 2016 - 5:04am
Hello:

I'm making incremental backups using xtrabackup.
My problem is that I can't figure out how to perform the restoration.

I'd like to restore (prepare) my backup in a different directory from the one where the backup is stored.
Otherwise, I think that applying my incremental backups will overwrite the full backup's information.

At the moment I'm copying full backup directory in a tmp one and then I prepare the incremental backups.
I have 31 GB of backup and to copy it is a long process.

As I understand it, if I apply incremental backups over a full backup, I'll get the full backup as of the end of the incremental period. But I'll lose my previous full backup, won't I?
I need to keep my full backup as it was before.

Does anyone know how I could prepare an incremental backup pointing to a different directory?
I can't find any information about it.
I've tried different options such as --datadir, --tmpdir, even --targetdir (I don't remember where I found that one), but my tmp directory remains empty.

Thanks!!
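
For context, the documented prepare workflow does exactly what this post describes doing manually: copy the full backup so the original stays intact, then apply each incremental into the copy with --incremental-dir. A sketch, with hypothetical paths:

# keep the original full backup intact: prepare a copy of it
rsync -a /backups/full/ /tmp/restore/
# prepare the base backup, redo-only
innobackupex --apply-log --redo-only /tmp/restore
# apply each incremental except the last one with --redo-only
innobackupex --apply-log --redo-only /tmp/restore --incremental-dir=/backups/inc1
# final prepare (replays the log and rolls back uncommitted transactions)
innobackupex --apply-log /tmp/restore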

Cacti Monitoring on FreeBSD 10.2 issues with Apache graph (solved)

Latest Forum Posts - April 7, 2016 - 12:02am
Problem:
"# /usr/local/bin/php -q /usr/local/www/cacti-0.8.8f/scripts/ss_get_by_ssh.php --host 213.555.555.55 --type apache --items gg
gg:-1"

After searching the forum threads, I found the cause: wget was not installed on both servers.
So, a big request to the developers: please add to the documentation that wget should be installed on both servers.
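
A quick check along these lines confirms the fix (a sketch; FreeBSD 10.2 uses pkg, and package managers on the Cacti host will vary):

# on the FreeBSD 10.2 host
which wget || pkg install -y wget
# on a Debian/Ubuntu Cacti server
which wget || sudo apt-get install -y wget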

11 Days Until Percona Live: Justification, Julian Cash, Sponsor List

Latest MySQL Performance Blog posts - April 6, 2016 - 10:59am

Only 11 days until Percona Live! Are you registered?

It’s getting close to the Percona Live Data Performance Conference 2016! The conference starts Monday, April 18th. We have some quick updates and pieces of information to pass on to you, so keep reading to find out the details.

Need Help Justifying Your Attendance?

Haven’t been able to justify going to Percona Live to your boss? Here is a link that will help you with that.

Julian Cash X-RAY Light Painting Studio

Don’t forget that Julian Cash will be setting up an X-RAY Light Painting Studio in the Exhibition Hall for your amazement and amusement. Light Painting Portraits are a rare and incredible art form that Julian has pioneered. His interactive artwork at Percona Live is an example of Julian’s vision, which also was featured on America’s Got Talent.

He’s running a campaign to take Light Painting portraits in places where it would otherwise be impossible.  With your help, the studio will be equipped with the best technology imaginable, which will make for countless magical and fantastical images. Check it out!

This Year’s Sponsors

Our sponsors for Percona Live Data Performance Conference are set, and we want to thank them for helping us to put on this event. Below, you can see who sponsored Percona Live this year:

  • Diamond Plus Sponsor
    • Deep Information Science
    • RocksDB (Facebook)
  • Platinum
    • Rackspace
    • VividCortex
  • Gold
    • AWS
  • Silver
    • Yelp
    • Shopify
  • Exhibition Hall
    • Codership
    • Blackmesh
    • University of Michigan (DBSeer)
    • Vertabelo
    • Raintank (Grafana.net)
    • Red Hat
    • ScaleArc
    • SolarWinds
    • Pythian
    • AgilData
    • Box
    • Clustrix
    • MaxGauge
    • HGST
    • Severalnines
    • VMware
    • Eventbrite
    • MemSQL
  • Coffee Breaks
    • Mailchimp
  • Badge Lanyards and Conference Bags
    • Google
  • 50 Minute Breakout
    • Rackspace
    • Clustrix
  • Thirty-Minute Demo
    • Vertabelo
  • Data in the Cloud Track
    • Red Hat
    • Intel
  • Signage Sponsor
    • MONyog (Webyog)

Thanks again to all of our sponsors, and all of our attendees. If you haven’t registered yet, do it now! There are only 11 days left until the conference!

Percona Live: Advanced Percona XtraDB Cluster in a Nutshell, La Suite

Latest MySQL Performance Blog posts - April 6, 2016 - 9:30am

This blog post covers what you will need for the Percona Live Advanced Percona XtraDB Cluster tutorial.

Percona Live 2016 is happening in April! If you are attending, and you are registered for the Percona XtraDB Cluster (Galera) tutorial presented by Kenny and myself, please make sure that you:

  • Bring your laptop; this is a hands-on tutorial
  • Have VirtualBox 5 installed
  • Bring a machine that supports 64-bit VMs
  • Have at least 5GB of free disk space

This advanced tutorial is a continuation of the beginners’ tutorial, so some basic experience with Percona XtraDB Cluster and Galera is required.

See you soon!

Description of the Percona Live Advanced Percona XtraDB Cluster Talk

Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona XtraDB Cluster integrates Percona Server with the Galera synchronous replication library in a single product package, which enables you to create a cost-effective MySQL cluster. For three years at Percona Live, we’ve introduced people to this technology – but what’s next?

This tutorial continues your education and targets users that already have experience with Percona XtraDB Cluster and want to go further. This tutorial will cover the following topics:

  • Bootstrapping in detail (see the sketch after this list)
  • Certification errors, understanding and preventing them
  • Replication failures, how to deal with them
  • Secrets of Galera Cache – Mastering flow control
  • Understanding and verifying replication throughput
  • How to use WAN replication
  • Implications of consistent reads
  • Backups
  • Load balancers and proxy protocol
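
As a taste of the first topic, bootstrapping the first node of a cluster typically looks like this (a sketch; service names vary by distribution and PXC version):

# on the first node only, to form a new cluster
service mysql bootstrap-pxc
# equivalent to starting mysqld with --wsrep-new-cluster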

Register for Percona Live now!

EXPLAIN FORMAT=JSON wrap-up

Latest MySQL Performance Blog posts - April 6, 2016 - 7:23am

This blog is an EXPLAIN FORMAT=JSON wrap-up for the series of posts I’ve done in the last few months.

In this series, we’ve discussed everything unique to EXPLAIN FORMAT=JSON. I intentionally skipped a description of members such as table_name, access_type or select_id, which are not unique.

In this series, I only mentioned in passing members that replace information from the Extra column in the regular EXPLAIN output, such as using_join_buffer, partitions, using_temporary_table or simply message. You can see these in queries like the following:

mysql> explain format=json select rand() from dual\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "message": "No tables used"
  }
}
1 row in set, 1 warning (0.00 sec)

Or

mysql> explain format=json select emp_no from titles where 'Senior Engineer' = 'Senior Cat'\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "message": "Impossible WHERE"
  }
}
1 row in set, 1 warning (0.01 sec)
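
The other members follow the same pattern. For example, grouping on a non-indexed column reports using_temporary_table inside grouping_operation (a sketch against the same employees sample database; output heavily abridged):

mysql> explain format=json select first_name, count(*) from employees group by first_name\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    ...
    "grouping_operation": {
      "using_temporary_table": true,
      "using_filesort": true,
      ...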

Their use is fairly intuitive and similar to regular EXPLAIN, and I don’t think a blog post about each of them would add much.

The only thing left to list is a Table of Contents for the series:

attached_condition: How EXPLAIN FORMAT=JSON can spell-check your queries

rows_examined_per_scan, rows_produced_per_join: EXPLAIN FORMAT=JSON answers on question “What number of filtered rows mean?”

used_columns: EXPLAIN FORMAT=JSON tells when you should use covered indexes

used_key_parts: EXPLAIN FORMAT=JSON provides insight into which part of multiple-column key is used

EXPLAIN FORMAT=JSON: everything about attached_subqueries, optimized_away_subqueries, materialized_from_subquery

EXPLAIN FORMAT=JSON provides insights on optimizer_switch effectiveness

EXPLAIN FORMAT=JSON: order_by_subqueries, group_by_subqueries details on subqueries in ORDER BY and GROUP BY

grouping_operation, duplicates_removal: EXPLAIN FORMAT=JSON has all details about GROUP BY

EXPLAIN FORMAT=JSON has details for subqueries in HAVING, nested selects and subqueries that update values

ordering_operation: EXPLAIN FORMAT=JSON knows everything about ORDER BY processing

EXPLAIN FORMAT=JSON knows everything about UNIONs: union_result and query_specifications

EXPLAIN FORMAT=JSON: buffer_result is not hidden!

EXPLAIN FORMAT=JSON: cost_info knows why optimizer prefers one index to another

EXPLAIN FORMAT=JSON: nested_loop makes JOIN hierarchy transparent

Thanks for following the series!

Failed to rejoin the cluster

Latest Forum Posts - April 5, 2016 - 10:51pm
Dear all,
we have a 4-node Percona XtraDB Cluster v5.5.x, and until yesterday everything worked fine. Yesterday we had to temporarily disconnect one node for maintenance, and now we are not able to rejoin it to the cluster.
Each time we start the node, it fails after SST. I tried completely deleting all files inside the MySQL datadir, but the result is the same.
Could someone please help?
Thanks in advance!

Webinar April 7, 10am PDT – Introduction to Troubleshooting Performance: What Affects Query Execution?

Latest MySQL Performance Blog posts - April 5, 2016 - 5:49pm

Join us for our latest webinar on Thursday, April 7, at 10 am PDT (UTC-7) on Introduction to Troubleshooting Performance: What Affects Query Execution?

MySQL installations experience a multitude of issues: server hangs, wrong data stored in the database, slow-running queries, stopped replication, poor user connections and many others. It’s often difficult not only to troubleshoot these issues, but even to know which tools to use.

Slow-running queries, threads stacking up for ages during peak times, application performance suddenly lagging: these are just some of the entries on a long list of possible database performance issues. How can you figure out why your MySQL installation isn’t running as fast as you’d like?

In this introductory webinar, we will concentrate on the three main reasons for performance slowdown:

  • Poorly optimized queries
  • Concurrency issues
  • Effects of hardware and other system factors

This webinar will teach you how to identify and fix these issues. Register now.

If you can’t attend this webinar live, register anyway and we’ll send you a link to the recording.

Sveta Smirnova, Principal Technical Services Engineer.

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB (later Sun, then Oracle). She is the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL.

Percona Live featured talk with Anastasia Ailamaki — RAW: Fast queries on JIT databases

Latest MySQL Performance Blog posts - April 5, 2016 - 9:07am

Welcome to the next Percona Live featured talk with Percona Live Data Performance Conference 2016 speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference, as well as discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live registration bonus!

In this Percona Live featured talk, we’ll meet Anastasia Ailamaki, Professor and CEO, EPFL and RAW Labs. Her talk will be RAW: Fast queries on JIT databases. RAW is a query engine that reads data in its raw format and processes queries using adaptive, just-in-time operators. The key insight is its use of virtualization and dynamic generation of operators. I had a chance to speak with Anastasia and learn a bit more about RAW and JIT databases:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Anastasia: I am a computer engineer and initially trained on networks. I came across databases in the midst of the object-oriented hype — and was totally smitten by both the power of data models and the wealth of problems one had to solve to create a functioning and performant database system. In the following years, I built several systems as a student and (later) as a coder. At some point, however, I needed to learn more about the machine. I decided to do a Masters in computer architecture, which led to a Ph.D. in databases and microarchitecture. I became a professor at CMU, where for eight years I guided students as they built their ideas into real systems that assessed those ideas' potential and value. During my sabbatical at EPFL, I was fascinated by the talent and opportunities in Switzerland — I decided to stay and, seven years later, co-founded RAW Labs.

Percona: Your talk is going to be on “RAW: Fast queries on JIT databases.” Would you say you’re an advocate of abandoning (or at least not relying on) the traditional “big structured database accessed by queries” model that has existed for most of computing? Why?

Anastasia: The classical usage paradigm for databases has been “create a database, then ask queries.” Traditionally, “creating a database” means creating a structured copy of the entire dataset. This is now passé for the simple reason that data is growing too fast, and loading overhead grows with data size. What’s more, we typically use only a small fraction of the data available, and investing in the mass of owned data is a waste of resources — people have to wait too long from the time they receive a dataset until they can ask a query. And it doesn’t stop there: users are asked to pick a database engine based on the format and intended use of the data. We associate row stores with transactions, NoSQL with JSON, and column stores with analytics, but true insight comes from combining all of the data semantically as opposed to structurally. With each engine optimizing for specific kinds of queries and data formats, analysts subconsciously factor in limitations when piecing together their infrastructure. We only know the best way to structure data when we see the queries, so loading data and developing query processing operators before knowing the queries is premature.

Percona: What are the conditions that make JIT databases in general (and RAW specifically) the optimum solution?

Anastasia: JIT databases push functionality to the last minute, and execute it right when it’s actually needed. Several systems perform JIT compilation of queries, which offers great performance benefits (an example is Hyper, a system recently acquired by Tableau). RAW is JIT on steroids: it leaves data at its source and only reads it, or asks for any system resources, when they’re actually required. You may have 10000 files, and a file will only be read when you ask a query that needs the data in it. With RAW, when the user asks a query, RAW code-generates the raw source data adaptors and the entire query engine needed to run the query. It stores all useful information about the accessed data, as well as popular operators generated in the past, and uses them to accelerate future queries. It adapts to system resources on the fly and only asks for them when needed. RAW is an interface to raw data and operational databases alike. In addition, the RAW query language is incredibly rich; it is a superset of SQL which allows navigation on hierarchical data and tables at the same time, with support for variable assignments, regular expressions, and more for log processing — while staying in declarative land. Therefore, the analysts only need to describe the desired result in SQL, without thinking of data format.

Percona: What would you say is the next step for JIT and RAW? What keeps you up at night concerning the future of this approach?

Anastasia: The next step for RAW is to reach out to as many people as possible — especially users with complex operational data pipelines — and reduce cost and eliminate pipeline stages, unneeded data copies, and extensive scripting. RAW is a new approach that can work with existing infrastructures in a non-intrusive way. We are well on our way with several proof-of-concept projects that create verticals for RAW, and demonstrate its usefulness for different applications.

Percona: What are you most looking forward to at Percona Live Data Performance Conference 2016?

Anastasia: I am looking forward to meeting as many users and developers as possible, hearing their feedback on RAW and our ideas, and learning from their experiences.

You can read more about RAW and JIT databases at Anastasia’s academic group’s website: dias.epfl.ch.

Want to find out more about Anastasia and RAW? Register for Percona Live Data Performance Conference 2016, and see her talk RAW: Fast queries on JIT databases. Use the code “FeaturedTalk” and receive $100 off the current registration price!

Percona Live Data Performance Conference 2016 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

xtrabackup dumps only one DB

Latest Forum Posts - April 5, 2016 - 5:52am
Hello,

I'm trying to backup whole MySQL instance, running:
sudo xtrabackup --host=db.server.com --user=root --password=secret.8 --backup --target-dir="/tmp/backup_dir"

It copies the mysql, performance_schema and test_staging DBs properly, but the last dir, test_production, contains only the db.opt file without any table files. Any hint how to fix it?

ls /tmp/backup_dir:

/tmp/test/2016-04-05_12-10-59# ls
backup-my.cnf  test_production  mysql  xtrabackup_binlog_info  xtrabackup_info
ibdata1  test_staging  performance_schema  xtrabackup_checkpoints  xtrabackup_logfile

# ls -l mysql | wc -l
80
# ls -l performance_schema/ | wc -l
54
# ls -l test_staging | wc -l
626
# ls -l test_production/ | wc -l
2

Both test_* databases should contain the same tables (but with different data).
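
One thing worth checking (a suggestion, not a confirmed diagnosis): the xtrabackup binary copies only InnoDB data files, so tables in another storage engine would be skipped. The engines in the problem database can be listed like this:

mysql> SELECT engine, COUNT(*)
    -> FROM information_schema.tables
    -> WHERE table_schema = 'test_production'
    -> GROUP BY engine;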

Data in the Cloud track at Percona Live with Brent Compton and Ross Turk: The Data Performance Cloud

Latest MySQL Performance Blog posts - April 4, 2016 - 5:03pm

In this blog, we’ll discuss the Data in the Cloud track at Percona Live with Red Hat’s Brent Compton and Ross Turk.

Welcome to another interview with the Percona Live Data Performance Conference speakers and presenters. This series of blog posts will highlight some of the talks and presentations available at Percona Live Data Performance Conference April 18-21 in Santa Clara. Read through to the end for a discount on Percona Live registration.

(A webinar sneak preview of their “MySQL on Ceph” cloud storage talk is happening on Wednesday, April 6th at 2 pm EDT. You can register for it here – all attendees will receive a special $200 discount code for Percona Live registration after the webinar! See the end of this blog post for more details!)

First, we need to establish some context. Data storage has traditionally, and for most of its existence, pretty much followed a consistent model: stable and fairly static big box devices that were purpose-built to house data. Needing more storage space meant obtaining more (or bigger) boxes. Classic scale-up storage: need more, go to the data storage vendor and order a bigger box.

The problem is that data is exploding, and has been growing exponentially for the last decade. Some estimates put the amount of data being generated worldwide at an increase of 40%-60% per year. That kind of increase, at that speed, doesn’t leave a lot of ramp-up time for long-term big box hardware investments. Things are changing too fast.

The immediate trend – evidenced by the declining revenues of classic storage boxes – is placing data in a cloud of scale-out storage. What is the cloud? Since that question has whole books devoted to it, let’s try to simplify it a bit.

Cloud computing benefits include scalability, instantaneous configuration, virtualized consumables and the ability to quickly expand base specifications. Moving workloads to the cloud brings with it numerous business benefits, including agility, focus and cost:

  • Agility. The cloud enables businesses to react to changing needs. As the workload grows or spikes, just add compute cycles, storage, and bandwidth with the click of a mouse.
  • Focus. Deploying workloads to the cloud enables companies to focus more resources on business-critical activities, rather than system administration.
  • Cost. Businesses can pay as they go for the services level they need. Planning and sinking money into long-term plans that may or may not pan out is not as big a problem.

When it comes to moving workloads into the cloud, low-throughput applications were the obvious first choice: email, non-critical business functions, team collaboration assistance. These are generally neither mission critical, nor do they require high levels of security. As application-driven services became more and more prevalent (think Netflix, Facebook, Instagram), more throughput-intensive services were moved to the cloud – mainly for flexibility during service spikes and to accommodate increased users. But tried-and-true high-performance workloads like databases, and other corporate kingdoms with perceived higher security requirements, have traditionally remained stuck in the old infrastructures that have served them well – until now.

So what is this all leading to? Well, according to Brent and Ross, ALL data will eventually be going to the cloud, and the old models of storage infrastructure are falling by the wayside. Between the lack of elasticity and scalability of purpose-built hardware, and the oncoming storage crisis, database storage is headed for cloud services solutions.

I had some time to talk with Brent and Ross about data in the cloud, and what we can expect regarding a new data performance cloud model.

Percona: There is always a lot of talk about public versus private paradigms when it comes to cloud discussions. To you, this is fairly inconsequential. How do you see “the cloud?” How would you define it in terms of infrastructure for workloads?

RHT: Red Hat has long provided software for hybrid clouds, with the understanding that most companies will use a mix of public cloud and private cloud infrastructure for their workloads. This means that Red Hat software is supported both on popular public cloud platforms (such as AWS, Azure, and GCE) as well as on-premise platforms (such as OpenStack private clouds). Our work with Percona in providing a reference architecture for MySQL running on Ceph is all about giving app developers a comparable, deterministic experience when running their MySQL-based apps on a Ceph private storage cloud vs. running them in the public cloud.

Percona: So, your contention is that ALL data is headed to the cloud. What are the factors that are going to ramp up this trend? What level of information storage will cement this as inevitable?

RHT: We’d probably restate this to “most data is headed to A cloud.” There are two distinctions being made in this statement. The first is “most” versus “all” data. For years to come, there will be late adopters whose on-premise data is NOT served through a private cloud infrastructure. The second distinction is “a” cloud versus “the” cloud. “A” cloud means either a public cloud or a private cloud (or some hybrid of the two). Private clouds are being constructed by the world’s most advanced companies within their own data centers to provide a similar type of elastic infrastructure, with dynamic provisioning and lower CAPEX/OPEX costs (as is found in public clouds).

Percona: What are the concerns you see with moving all workloads to the cloud, and how would you address those concerns?

RHT:  The distinctions laid out in the previous answer address this. For myriad reasons, some data and workloads will reside on-premise within private clouds for a very long time. In fact, as the technology matures for building private clouds (as we’re seeing with OpenStack and Ceph), and can offer many of the same benefits as public clouds, we see the market reaching an equilibrium of sorts. In this equilibrium many of the agility, flexibility, and cost benefits once available only through public cloud services will be matched by private cloud installations. This will re-base the public versus private cloud discussion to fewer, simpler trade-offs – such as which data must reside on-premises to meet an enterprise’s data governance and control requirements.

Percona: So you mentioned the “Data Performance Cloud”? How would you describe what that is, and how it affects enterprises?

RHT: For many enterprises, data performance workloads have been the last category of workloads to move to a cloud, whether public or private. Public cloud services, such as AWS Relational Database Service with Provisioned-IOPS storage, have illustrated improved data performance for many workloads once relegated to the cloud sidelines. Now, with the guidelines in the reference architecture being produced by Percona and the Red Hat Ceph team, customers can achieve data performance on their private Ceph storage clouds comparable to what they get from high-performance public cloud services.

Percona: What can people expect to get out of the Data in the Cloud track at Percona Live this year?

RHT: Architecture guidelines for building and optimizing MySQL databases on a Ceph private storage cloud. These architectures will include public cloud benefits along with private cloud control and governance.

Want to find out more about MySQL, Ceph, and Data in the Cloud? Register for Percona Live Data Performance Conference 2016, and see Red Hat’s sponsored Data in the Cloud Keynote Panel: Cloudy with a chance of running out of disk space? Or Sunny times ahead? Use the code “FeaturedTalk” and receive $100 off the current registration price!

The Percona Live Data Performance Conference is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

MySQL and Ceph: Database-as-a-Service sneak preview

Businesses are familiar with running a Database-as-a-Service (DBaaS) in the public cloud. They enjoy the benefits of on-demand infrastructure for spinning up lots of MySQL instances with predictable performance, without the headaches of managing them on specific, highly available bare-metal clusters.

This webinar lays the foundation for building a DBaaS on your own private cloud, enabled by Red Hat® Ceph Storage. Join senior architects from Red Hat and Percona for reference architecture tips and head-to-head performance results of MySQL on Ceph versus MySQL on AWS.

This is a sneak preview of the labs and talks to be given in April 2016 at the Percona Live Data Performance Conference. Attendees will receive a discount code for $200 off Percona Live registration!

Speakers:

  • Brent Compton, director, Storage Solution Architectures, Red Hat
  • Kyle Bader, senior solutions architect, Red Hat
  • Yves Trudeau, principal consultant, Percona

Join the live event:

Wednesday, April 6, 2016 | 2 p.m. ET | 11 a.m. PT

Time zone converter

See Bill Nye the Science Guy at Percona Live and help change the world: a special offer!

Latest MySQL Performance Blog posts - April 4, 2016 - 9:47am

See Bill Nye the Science Guy at Percona Live Data Performance Conference, and help an excellent cause!

The best science is built on solid data. As a world-renowned icon in tech and geek circles everywhere, Bill Nye fights to raise awareness of the value of science, critical thinking, and reason. He hopes that the data he brings will help inspire people everywhere to change the world. And seeing as the open source community is full of science-minded individuals, he is excited to speak to everyone at Percona Live!

Since his early days as a comedian, to creating his well-known Science Guy character, to the present day, Bill Nye has always brought the impressive and illuminating power of science to people.

Bill Nye’s keynote speech at Percona Live is “Bill Nye’s Objective – Change the World!” Through his talks, books, and day job as the CEO of The Planetary Society (the world’s largest non-governmental space interest organization), Bill wants to get people involved in the power of science. Science can teach people about the world, and how they can influence and change it. Science helps us to understand what Bill likes to call “our place in space.”

And now you can help change the world, just by attending the Percona Live Data Performance Conference! For a limited time, if you buy a Keynote or Expo pass to Percona Live using the promo code “NYE” you will get the pass for just $10, AND all the money from these registrations will be donated to The Planetary Society. The Planetary Society sponsors projects that seed innovative space technologies, nurtures creative young minds, and is a vital advocate for our future in space. Its mission is to empower the world’s citizens to advance space science and exploration.

A great deal, and a great cause! This offer is limited to the first 250 registrations, so hurry up and help change the world!

Why is Percona running several mysql threads?

Latest Forum Posts - April 2, 2016 - 12:50am
I have a problem and am confused. When I installed MySQL, I saw only one thread running when I used the top command or lsof -i:3306. But when I installed Percona, I found several mysql threads running. Why?

Certification failed for TO isolated action

Latest Forum Posts - April 1, 2016 - 10:31pm
Dears,

From time to time my cluster shuts itself down.
I have no idea what "Certification failed for TO isolated action" means.


2016-04-01 19:18:24 20379 [ERROR] WSREP: Certification failed for TO isolated action: source: fb18098c-f5d5-11e5-b23b-d6c2f78040de version: 3 local: 1 state: CERTIFYING flags: 65 conn_id: 9120130 trx_id: -1 seqnos (l: 2130783, g: 3969655, s: 3969649, d: -1, ts: 12265127177903945)
2016-04-01 19:18:24 20379 [Note] WSREP: /usr/sbin/mysqld: Terminated.

And here is the full log:

2016-04-01 19:17:39 20379 [Note] WSREP: (fb18098c, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://33.33.33.33:4567
2016-04-01 19:17:41 20379 [Note] WSREP: (fb18098c, 'tcp://0.0.0.0:4567') reconnecting to 86935180 (tcp://45.79.179.6:4567), attempt 0

2016-04-01 19:18:24 20379 [Note] WSREP: evs::proto(fb18098c, GATHER, view_id(REG,37acae0e,72)) suspecting node: 86935180
2016-04-01 19:18:24 20379 [Note] WSREP: evs::proto(fb18098c, GATHER, view_id(REG,37acae0e,72)) suspected node without join message, declaring inactive
2016-04-01 19:18:24 20379 [Note] WSREP: declaring 37acae0e at tcp://11.11.11.11:4567 stable
2016-04-01 19:18:24 20379 [Note] WSREP: Node 37acae0e state prim
2016-04-01 19:18:24 20379 [Note] WSREP: view(view_id(PRIM,37acae0e,73) memb {
2016-04-01 19:18:24 20379 [Note] WSREP: save pc into disk
2016-04-01 19:18:24 20379 [Note] WSREP: forgetting 86935180 (tcp://33.33.33.33:4567)
2016-04-01 19:18:24 20379 [Note] WSREP: (fb18098c, 'tcp://0.0.0.0:4567') turning message relay requesting off
2016-04-01 19:18:24 20379 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
2016-04-01 19:18:24 20379 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
2016-04-01 19:18:24 20379 [Note] WSREP: STATE EXCHANGE: sent state msg: 47d61e65-f82e-11e5-a461-8a8519c6c4a4
2016-04-01 19:18:24 20379 [Note] WSREP: STATE EXCHANGE: got state msg: 47d61e65-f82e-11e5-a461-8a8519c6c4a4 from 0 (server1)
2016-04-01 19:18:24 20379 [Note] WSREP: STATE EXCHANGE: got state msg: 47d61e65-f82e-11e5-a461-8a8519c6c4a4 from 1 (server2)
2016-04-01 19:18:24 20379 [Note] WSREP: Quorum results:
2016-04-01 19:18:24 20379 [Note] WSREP: Flow-control interval: [91, 91]
2016-04-01 19:18:24 20379 [Note] WSREP: New cluster view: global state: 26f7d687-f400-11e5-a284-7a3269f0619f:3969651, view# 20: Primary, number of nodes: 2, my index: 1, protocol version 3
2016-04-01 19:18:24 20379 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-04-01 19:18:24 20379 [Note] WSREP: REPL Protocols: 7 (3, 2)
2016-04-01 19:18:24 20379 [Note] WSREP: Service thread queue flushed.
2016-04-01 19:18:24 20379 [Note] WSREP: Assign initial position for certification: 3969651, protocol version: 3
2016-04-01 19:18:24 20379 [Note] WSREP: Service thread queue flushed.
2016-04-01 19:18:24 20379 [ERROR] WSREP: Certification failed for TO isolated action: source: fb18098c-f5d5-11e5-b23b-d6c2f78040de version: 3 local: 1 state: CERTIFYING flags: 65 conn_id: 9120119 trx_id: -1 seqnos (l: 2130781, g: 3969653, s: 3969620, d: -1, ts: 12265127145999324)
2016-04-01 19:18:24 20379 [Note] WSREP: Closing send monitor...
2016-04-01 19:18:24 20379 [Note] WSREP: Closed send monitor.
2016-04-01 19:18:24 20379 [Note] WSREP: gcomm: terminating thread
2016-04-01 19:18:24 20379 [Note] WSREP: gcomm: joining thread
2016-04-01 19:18:24 20379 [ERROR] WSREP: Certification failed for TO isolated action: source: fb18098c-f5d5-11e5-b23b-d6c2f78040de version: 3 local: 1 state: CERTIFYING flags: 65 conn_id: 9120130 trx_id: -1 seqnos (l: 2130783, g: 3969655, s: 3969649, d: -1, ts: 12265127177903945)
2016-04-01 19:18:24 20379 [Note] WSREP: /usr/sbin/mysqld: Terminated.

Also, after installing Percona XtraDB Cluster I got a lot of "Access denied" warnings in the logs:

2016-04-01 19:18:09 20379 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
2016-04-01 19:18:12 20379 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
2016-04-01 19:18:15 20379 [Warning] Access denied for user 'root'@'localhost' (using password: NO)


Waiting for any suggestions
Regards

MyISAM and frm files

Latest Forum Posts - April 1, 2016 - 1:27pm
Hi there,
So I see the notice here
https://www.percona.com/doc/percona-...k_restore.html

Because xtrabackup doesn’t copy MyISAM files, .frm files, and the rest of the database, you might need to back those up separately.

Did innobackupex copy these files?
If so, I don't understand why it is being retired, since xtrabackup is much more limited without innobackupex.
I don't see the point of using two utilities to back up my datadir.

What is the solution to this?
Is innobackupex or another wrapper coming back to take care of these limitations?

Thanks
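
For context, that is essentially what the innobackupex wrapper did: briefly hold FLUSH TABLES WITH READ LOCK while the non-InnoDB files are copied. A manual sketch of the same idea (paths hypothetical):

mysql> FLUSH TABLES WITH READ LOCK;
-- from another shell, while the lock is held:
--   cp -a /var/lib/mysql/mydb/*.frm /var/lib/mysql/mydb/*.MYD /var/lib/mysql/mydb/*.MYI /backup/files/
mysql> UNLOCK TABLES;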

Percona XtraBackup 2.4.2 is now available

Latest MySQL Performance Blog posts - April 1, 2016 - 7:30am

Percona is glad to announce the first GA release of Percona XtraBackup 2.4.2 on April 1st, 2016. Downloads are available from our download site and from apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New Features:

Bugs Fixed:

  • When backup was taken on MariaDB 10 with GTID enabled, Percona XtraBackup didn’t store gtid_slave_pos in xtrabackup_slave_info but logged it only to STDERR. Bug fixed #1404484.
  • Backup process would fail if --throttle option was used. Bug fixed #1554235.

Release notes with all the bugfixes for Percona XtraBackup 2.4.2 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Fixing MySQL Bug#2: now MySQL makes toast!

Latest MySQL Performance Blog posts - April 1, 2016 - 7:15am

Historical MySQL Bug#2, opened 12 Sep 2002, states that MySQL Connector/J doesn’t make toast. It hasn’t been fixed for more than 14 years. I’ve finally created a patch for it.

First of all: why fix this only for MySQL Connector/J? We should make sure the server can do this for any implementation! With this fix, MySQL server (starting with version 5.1) can now make toast.

There are a few dependencies, though (see the assembled setup picture):

  1. Raspberry Pi + PiFace shield
  2. Power switch relay (I’ve used an IoT Power Relay)
  3. Toaster oven (any cheap mechanical model will work)

Patch:

  1. Make_toast binary, which is run on the Raspberry Pi and PiFace interface (you’ll need to install the PiFace library):
    #!/usr/bin/python
    import sys
    from time import sleep

    # toasting time comes from the first argument, defaulting to 10 seconds
    if len(sys.argv) == 2 and sys.argv[1].isdigit():
        toast_time = sys.argv[1]
    else:
        toast_time = 10

    print "Toasting for " + str(toast_time) + " seconds..."

    import pifacedigitalio as p

    try:
        # pin 7 drives the relay: high = oven on, low = oven off
        p.init()
        p.digital_write(7, 1)
        sleep(float(toast_time))
        p.digital_write(7, 0)
    except (KeyboardInterrupt, SystemExit):
        print "Exiting and turning off heat..."
        p.digital_write(7, 0)
        sys.exit(1)

    print "Your toast is ready! Enjoy!"
  2. MySQL UDF, based on lib_mysqludf_sys, which calls the make_toast binary:
    char* make_toast(UDF_INIT *initid, UDF_ARGS *args, char* result,
                     unsigned long* length, char *is_null, char *error)
    {
        FILE *pipe;
        char line[1024];
        unsigned long outlen, linelen;
        char buf[40];

        result = malloc(1);
        result[0] = 0x00;
        outlen = 0;

        /* run the make_toast script with the requested duration */
        sprintf(buf, "make_toast %s", args->args[0]);
        pipe = popen(buf, "r");

        /* collect the script's output, growing the buffer as we go
           (the extra byte leaves room for the terminating NUL below) */
        while (fgets(line, sizeof(line), pipe) != NULL) {
            linelen = strlen(line);
            result = realloc(result, outlen + linelen + 1);
            strncpy(result + outlen, line, linelen);
            outlen = outlen + linelen;
        }
        pclose(pipe);

        if (result == NULL || !(*result)) {
            *is_null = 1;
        } else {
            result[outlen] = 0x00;
            *length = strlen(result);
        }
        return result;
    }

Usage:

mysql> call make_toast(300);

Demo picture (thanks to my colleague Fernando Laudares Camargos); an actual video will follow:

Implementation details:

Hardware/wiring

The relay switch powers the toaster oven on, and no modifications are needed to the oven itself. Make sure the timer is initially set to 30 minutes; the Raspberry Pi/MySQL UDF will then control how long you toast the bread.

The setup wiring is super easy (but may be counterintuitive if you are used to working with Arduino): use the output pins (image), connecting 5V on the PiFace to the “+” terminal on the relay switch, and one of the pins to the “-” terminal.

Software install

  1. Install PiFace software and Python bindings
  2. Test the make_toast python script
  3. Add user “mysql” to the spi and gpio groups so it can manipulate pins:
    # gpasswd -a mysql gpio
    # gpasswd -a mysql spi
  4. Download the make toast UDF code and run install.sh.

mysql> call make_toast(300);

Enjoy your toast when it is hot!

Could I partition an existing collection?

Latest Forum Posts - April 1, 2016 - 3:44am
Hi,
I have a collection with 15 GB of data. Could I partition it now?

I'm interested in 2 options:
1) I have a timestamp field in the collection, so could I partition the current data?
2) Could I just add new partitions daily and remove old ones (30-day rotation)?

innobackupex produces compressed archives (can't apply logs)

Latest Forum Posts - March 31, 2016 - 7:55pm
I am using "innobackupex --export --tables-file=/root/tables.txt /tmp/backup".

I've noticed that after the backup I can't apply logs immediately ("innobackupex --apply-log /tmp/backup/2016-02-14_03-43-42") and I need to decompress the files first ("innobackupex --decompress /tmp/backup/2016-02-14_03-43-42").

Without applying the logs, my files are corrupted.
Plus, twice as much time is wasted on compression/decompression.

Is there a way to export through innobackupex without compression? I love innobackupex because it can export files in parallel.
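
For reference, a sketch of how those options usually split (assuming no compress options are set in the [xtrabackup] section of my.cnf, since innobackupex compresses only when --compress is in effect): --parallel controls parallel file copying at backup time, while --export belongs to the prepare step:

# backup (parallel copy, no compression requested)
innobackupex --parallel=4 --tables-file=/root/tables.txt /tmp/backup
# prepare, generating the files needed for exporting/importing tablespaces
innobackupex --apply-log --export /tmp/backup/2016-02-14_03-43-42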



MongoDB at Percona Live: A Special Open Source Community Discount

Latest MySQL Performance Blog posts - March 31, 2016 - 12:32pm

We want MongoDB at Percona Live!

One of the main goals of the Percona Live Data Performance Conference 2016 is celebrating and embracing the open source community. The community’s spirit of innovation, expertise and competition has produced incredible software, hardware, processes and products.

The open source community is a diverse and powerful collection of companies, organizations and individuals that have helped to literally change the world. Percona is proud to call itself a member of the open source community, and we strongly feel that upholding the principles of the community is a key to our success. These principles include an open dialog, an open mind, and a zeal for cooperative interaction. Together, we can create amazing things.

That’s why we were surprised when MongoDB declined to have us sponsor or speak at MongoDB World 2016, and even more taken aback when we were told our engineers are not welcome to attend the show. We make a special point of inviting competitors to participate in, speak at, and sponsor Percona Live – MongoDB included. We welcome our competitors to speak, sponsor and attend Percona Live because it is in the greater interest of the community at large to include all voices.

With that in mind, we’d like to extend a special offer to any MongoDB employees: sign up for the Percona Live Data Performance Conference 2016 using your company email, and receive a special VIP 25% discount off the registration price (use promo code “mongodb”).

In addition:

  • We invite all MongoDB attendees to a VIP cocktail reception with the Percona Team Tuesday, April 19th from 5-6pm
  • Percona is pleased to host all MongoDB attendees as special guests at the Tuesday, April 19th, Community Dinner Event at Pedro’s

It’s our way of showing our solidarity with the open source community, and expressing our belief that we work best when we work together.

See you all at Percona Live! Register here!


