Orchestrator is a MySQL topology manager and failover solution, used in production at many large MySQL installations. It detects, queries, and refactors complex replication topologies, and provides reliable failure detection and intelligent recovery and promotion.
This practical tutorial focuses on Orchestrator's failure detection and recovery, demonstrating them in action and providing real-world examples and cookbooks for handling failovers.
- Brief introduction to Orchestrator
- Brief overview of basic configuration
- Reliable detection
- The complexity of successful failover
- Orchestrator's approach to failover
- Failover meta: anti-flapping, acknowledgments, auditing, downtime, promotion rules
- Master service discovery schemes: VIP, DNS, Proxy, Consul
- Cookbooks and considerations for master service discovery and for failover configuration
We will run demos in class. As time allows, attendees will have the opportunity for hands-on operations.
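As a rough illustration of the failover configuration topics above (anti-flapping windows, recovery filters, post-failover hooks), here is a minimal sketch of an Orchestrator configuration fragment; the key names follow Orchestrator's documented configuration, but the values and the hook script path are illustrative assumptions, not recommendations:

```json
{
  "RecoveryPeriodBlockSeconds": 3600,
  "RecoverMasterClusterFilters": ["*"],
  "ApplyMySQLPromotionAfterMasterFailover": true,
  "PostMasterFailoverProcesses": [
    "/usr/local/bin/update-master-discovery.sh {failedHost} {successorHost}"
  ]
}
```

`RecoveryPeriodBlockSeconds` is the anti-flapping window discussed above, and `PostMasterFailoverProcesses` is where a VIP/DNS/Consul update script (here a hypothetical `update-master-discovery.sh`) would be wired in.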
Table migrations remain a pain point for MySQL DBAs. There are more options than ever for running migrations, with later versions' in-place ALTERs and new third-party tools (like gh-ost). But with the increase in tools and procedures, it has become clear that there is no one-size-fits-all tool. Depending on the table size, available disk space, database traffic, server performance, and SLAs, some migration methods make more sense than others.
In this tutorial we will discuss and demonstrate the different tools and methods and the best practices and scenarios for each.
Optional Lab Requirements:
- macOS or Linux laptop (or VM)
- MySQL Sandbox, Percona Toolkit, gh-ost, sysbench
- MySQL 5.7 generic binary (for MySQL Sandbox)
Migration Concepts and Types
- Straight and in-place ALTER TABLE
- Alter on replicas, then promote
Caveats and Best Practices
- Test each of the migration types in a database cluster
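Assuming a disposable test cluster, the main migration methods above can be compared with commands along these lines (host, schema, table, and column names are placeholders):

```shell
# In-place online DDL (MySQL 5.6+): fails fast if the change cannot be done in place
mysql -h db1 -e "ALTER TABLE test.t1 ADD COLUMN c2 INT, ALGORITHM=INPLACE, LOCK=NONE"

# pt-online-schema-change (Percona Toolkit): trigger-based copy table
pt-online-schema-change --alter "ADD COLUMN c2 INT" D=test,t=t1 --execute

# gh-ost: triggerless, binlog-based copy; can be tested against a replica first
gh-ost --host=db1 --database=test --table=t1 \
  --alter="ADD COLUMN c2 INT" --assume-rbr --execute
```

Each approach trades off differently on disk space, replication lag, and lock behavior, which is exactly why testing all of them on your own cluster matters.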
AWS Aurora is one of the most promising cloud-based RDBMS solutions. A main reason for Aurora's success is that it is based on the InnoDB storage engine.
In this session, we are going to talk about how you can plan an Aurora migration efficiently using Terraform and Percona products. We will share our Terraform code for launching AWS Aurora clusters, tricks for checking data consistency, migration paths, and effective monitoring using PMM.
The session covers:
1. Why AWS Aurora? Future of AWS Aurora?
2. Build Aurora Infra
i. Using Terraform (Without Data)
ii. Restore Using Terraform & Percona XtraBackup (Using AWS S3 Bucket)
3. Verify data consistency
4. Aurora Migration
i. 1:1 Migration
ii. Many:1 migration using Percona Server multi-source replication
5. Show Benchmarks and PMM dashboard
The latest developments and the enticing roadmap show that MySQL Replication is addressing requirements in areas such as operations, flexibility, elasticity, automation, and seamless scale-out. Moreover, new replication features appear not only in MySQL 8 but also in MySQL 5.7, as the list of backports in the release changelogs shows.
Come and join the engineers behind the product to get to know the latest and greatest replication features, and how these enable the creation of rock-solid, scalable, and resilient database services able to keep up with the most demanding workloads and fault loads. Take this opportunity to expand your MySQL knowledge and learn more about hot topics such as Group Replication.
X-DB is Alibaba's next generation distributed and intelligent database which is ACID compliant, horizontally scalable, globally deployed and highly available. Motivated by the ideas of decoupling compute and storage, and intelligent database, we proposed a hardware/software co-designed architecture for X-DB to pursue extreme performance cost ratio, in order to support the world's largest and still fast-growing e-commerce platform. In this talk we'll introduce the work we have done in X-DB's SQL Engine.
- Plan Cache: an addition to the MySQL engine that boosts QPS by up to 170% on sysbench workloads and 34% on Alibaba's online purchasing system. We'll explain how plans are created and reused to skip heavy index dives at the optimization stage, how the cache is managed efficiently, and the performance benefits.
- A state-of-the-art distributed SQL processing framework: the key component of X-DB, built on top of the MySQL engine. We'll take a deep dive into the architecture, implementation details, and the technologies it leverages (e.g., high-performance RPC and scheduling subsystems). We'll also share the achievements and lessons learned so far, as well as our roadmap.
Offering MySQL, PostgreSQL and MariaDB database services in the cloud is different than doing so on-premise. Latency, connection redirection, optimal performance configuration are just a few challenges. In this session, Jun Su will walk you through Microsoft's journey to not only offer these popular OSS RDBMS in Microsoft Azure, but how it is implemented as a true DBaaS. Learn about Microsoft's Azure Database Services platform architecture, and how these services are built to scale.
Existing tools like mysqldump and replication cannot migrate data between GTID-enabled and non-GTID-enabled MySQL -- a common configuration across multiple cloud providers that cannot be changed. These tools are also cumbersome to operate and error-prone, requiring a DBA's attention for each data migration. We introduce a tool that allows easy migration of data between MySQL databases with a constant downtime on the order of seconds, regardless of data size.
Inspired by gh-ost, our tool is named Ghostferry and allows application developers at Shopify to migrate data without assistance from DBAs. It has been used to rebalance sharded data across databases. We plan to open source Ghostferry at the conference so that anyone can migrate their own data with minimal hassle and downtime. Since Ghostferry is written as a library, you can use it to build specialized data movers that move arbitrary subsets of data from one database to another.
Determining the most suitable relational database management system (RDBMS) for a given project isn't easy and can be rather challenging at times. It is like benchmarking fast cars created by different racing teams!
The presentation will compare, using a large body of experimental results, two highly available, closed-source cloud products: Amazon Aurora MySQL and Amazon RDS MySQL. Both are based on the open-source MySQL edition.
Both use cases have demonstrated that MySQL is a great solution for concurrent write, read, and mixed read/write traffic.
Additionally, both scenarios have proven successful, satisfying data integrity, reliability, and scalability requirements, with different outcomes.
The presentation will discuss some of the best practices in determining whether to put your MySQL instances on Amazon RDS, Amazon Aurora, or to leave them on-premise. The session will go into the pros and cons of each platform, such as performance, versioning, limitations, and more. After this session, you will be equipped with the information you need to choose the platform that best fits your workload.
In this session, we will dive deep into the unique features and changes that make up Aurora PostgreSQL, including the architectural differences that contribute to improved scalability, availability, and durability. Among the items we will cover are the elimination of checkpointing, the removal of the log buffer, and the use of a 4/6 quorum to improve durability and availability while reducing jitter.
Other areas we will cover are improvements in vacuum and the shared buffer cache, as well as some of our newer features such as Fast Clones and Performance Insights.
To finish off the session we will walk through the techniques available to migrate to Aurora PostgreSQL.
In this session, we will dive deep into the exciting features of Amazon RDS for PostgreSQL, including new PostgreSQL releases, new extensions, and larger instances. We will show benchmarks of new RDS instance types and their value proposition, look at how high availability and read scaling work on RDS PostgreSQL, and explore lessons we have learned managing a large fleet of PostgreSQL instances, including important tunables and possible gotchas around pg_upgrade.
How could AWS Database Migration Service work in your environment to help you migrate away from that proprietary colossus?
How does it work?
Why would you use this tool, and why would you avoid it?
What components does it have?
How does it perform?
These are all questions that will be answered during this talk. My goal is to provide you with an overview of its functionality and share my findings.