Tag - Replication

Percona Live Europe 2015 conference and tutorials schedule now available

The conference and tutorial schedule for Percona Live Europe 2015, September 21-23 in Amsterdam, was published this morning, and this year’s event will focus on MySQL, NoSQL and Data in the Cloud.
Conference sessions, which will follow each morning’s keynote addresses, are organized into a variety of formal tracks. Topic areas include high availability (HA), […]

Read more

Keep your MySQL data in sync when using Tungsten Replicator

MySQL replication isn’t perfect, and sometimes our data gets out of sync, whether through a replication failure or human intervention. We are all familiar with Percona Toolkit’s pt-table-checksum and pt-table-sync, which help us check and fix data inconsistencies. But imagine the following scenario, where we mix regular replication with the Tungsten Replicator:

We […]
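
For readers who have not used these tools before, the usual checksum-then-sync workflow looks roughly like the sketch below; the hostnames, credentials and checksum table are placeholders, and the Tungsten-specific wrinkles are what the full post digs into.

    # Checksum all tables from the master; per-chunk results are written to the
    # percona.checksums table and propagate to the slaves through replication.
    pt-table-checksum h=master.example.com,u=checksum_user,p=secret \
      --replicate=percona.checksums

    # Preview what pt-table-sync would change on a slave, without touching data...
    pt-table-sync h=slave1.example.com,u=checksum_user,p=secret \
      --replicate=percona.checksums --sync-to-master --print

    # ...then apply the fixes once the output has been reviewed.
    pt-table-sync h=slave1.example.com,u=checksum_user,p=secret \
      --replicate=percona.checksums --sync-to-master --execute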

Read more

Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16

Talking with Percona Live attendees last year, I heard a couple of common themes. First, people told me that there is a lot of great advanced content at Percona Live, but not much for people just starting to learn the ropes with MySQL. Second, they would like us to find a way […]

Read more

Multi-Valued INSERTs, AUTO_INCREMENT & Percona XtraDB Cluster

A common migration path from standalone MySQL/Percona Server to a Percona XtraDB Cluster (PXC) environment involves a period during which one node in the new cluster is configured as a slave of the production master that the cluster is slated to replace. In this way, the new cluster acts as a slave […]
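
The moving part behind the title is PXC’s cluster-wide handling of auto-increment values; as a hedged illustration (the table and values here are invented for the example), the relevant settings are easy to inspect on any node:

    -- With wsrep_auto_increment_control=ON (the default), PXC sets
    -- auto_increment_increment to the cluster size and gives each node
    -- its own auto_increment_offset.
    SHOW VARIABLES LIKE 'wsrep_auto_increment_control';
    SHOW VARIABLES LIKE 'auto_increment%';

    -- A multi-valued INSERT therefore hands out IDs spaced by the increment,
    -- e.g. 7, 10, 13 on a three-node cluster rather than 7, 8, 9.
    INSERT INTO demo_table (payload) VALUES ('a'), ('b'), ('c');

How those interleaved values behave when the node is also applying rows replicated from the standalone master is the kind of detail the full post works through.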

Read more

Using MySQL 5.6 Global Transaction IDs (GTIDs) in production: Q&A

Thank you to all of you who attended my webinar last week about Global Transaction IDs (GTIDs), which were introduced in MySQL 5.6 to make the reconfiguration of replication straightforward. If you missed my webinar, you can still listen to the recording and download the slides (free). We had a lot of questions during […]
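
For anyone who missed the session, the main convenience GTIDs bring is auto-positioning: a slave can be repointed at a new master without hunting down binlog file names and offsets. A minimal sketch, assuming gtid_mode is already ON on both servers and using placeholder hostnames and credentials:

    -- With MASTER_AUTO_POSITION=1 there is no need to specify
    -- MASTER_LOG_FILE or MASTER_LOG_POS when repointing the slave.
    STOP SLAVE;
    CHANGE MASTER TO
        MASTER_HOST = 'new-master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_AUTO_POSITION = 1;
    START SLAVE;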

Read more

Explaining Ark Part 4: Fixing Majority Write Concern

This is the fourth post in a series explaining Ark, a consensus algorithm we’ve developed for TokuMX and MongoDB to fix known issues in elections and failover. The tech report we released describes the algorithm in full detail. These posts are a layman’s explanation. This post assumes the reader is familiar […]

Read more

Explaining Ark Part 3: Why Data May Be Lost on a Failover

This is the third post in a series explaining Ark, a consensus algorithm we’ve developed for TokuMX and MongoDB to fix known issues in elections and failover. The tech report we released last week describes the algorithm in full detail. These posts are a layman’s explanation.
In the first post, I discussed […]

Read more

Explaining Ark Part 2: How Elections and Failover Currently Work

This is the second post in a series explaining Ark, a consensus algorithm we’ve developed for TokuMX and MongoDB to fix known issues in elections and failover. The tech report we released last week describes the algorithm in full detail. These posts are a layman’s explanation.
In the first post, I described […]

Read more

Explaining Ark Part 1: The Basics

Last week, we introduced Ark, a consensus algorithm similar to Raft and Paxos that we’ve developed for TokuMX and MongoDB. The purpose of Ark is to fix known issues in elections and failover. While the tech report detailing Ark explains everything formally, over the next few blog posts, I will go over Ark in layman’s […]

Read more

Introducing Ark: A Consensus Algorithm For TokuMX and MongoDB

Most of the time, our blog posts explain what’s great about the MongoDB improvements we’ve already shipped in TokuMX. Sometimes, though, it’s fun to talk about what’s coming soon, especially when user feedback would really help get the feature right. In my next series of blog posts, I get to geek out and talk […]

Read more