AWS Aurora is one of the most promising cloud-based RDBMS solutions. A key reason for Aurora's success is that it is built on the InnoDB storage engine.
In this session, we are going to talk about how you can plan an Aurora migration efficiently using Terraform and Percona products. We will share our Terraform code to launch AWS Aurora clusters, tricks to check data consistency, the migration path, and effective monitoring using PMM.
The abstract of the session includes:
1. Why AWS Aurora? What is the future of AWS Aurora?
2. Build the Aurora infrastructure
i. Using Terraform (without data)
ii. Restore using Terraform and Percona XtraBackup (via an AWS S3 bucket)
3. Verify data consistency
4. Aurora migration
i. 1:1 migration
ii. Many:1 migration using Percona Server multi-source replication
5. Benchmarks and the PMM dashboard
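A minimal sketch of what the Terraform portion can look like (the identifiers, instance class, and variable names below are illustrative placeholders, not the code we will share):

```hcl
# Illustrative Aurora MySQL cluster; names, sizes, and credentials are placeholders.
resource "aws_rds_cluster" "demo" {
  cluster_identifier      = "demo-aurora-cluster"
  engine                  = "aurora-mysql"
  master_username         = "admin"
  master_password         = var.master_password
  backup_retention_period = 7
  skip_final_snapshot     = true
}

resource "aws_rds_cluster_instance" "demo" {
  count              = 2
  identifier         = "demo-aurora-${count.index}"
  cluster_identifier = aws_rds_cluster.demo.id
  engine             = aws_rds_cluster.demo.engine
  instance_class     = "db.r4.large"
}
```

For the XtraBackup-based restore (item 2.ii above), the same `aws_rds_cluster` resource also accepts an `s3_import` block pointing at backup files staged in an S3 bucket.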
Security is an issue: controlling what and who can access your data is a must, and figuring out how to do it can be a nightmare.
But if you follow us on this journey, you will discover that implementing quite robust protection is possible.
Even more is possible, and your performance will improve as well. Cool, right?
We will discuss:
- Implement selective query access
- Define accessibility by location/IP/ID
- Reduce the cost of filtering to a minimum
- Automate query discovery
The Alibaba Database Team is developing X-DB, the next-generation distributed database supporting the world's largest and still fast-growing e-commerce platform. Substantial innovations have been made across X-DB to enhance its capability and performance since last year. In this talk, we'll introduce what we've accomplished in the SQL engine.
* Plan Cache: an addition to the MySQL engine that boosts QPS by up to 170% on sysbench workloads and 34% on Alibaba's online purchasing system. We'll explain how cached plans are created and reused to skip heavy index dives at the optimization stage, how the cache is managed efficiently, and the resulting performance benefits.
* A state-of-the-art distributed SQL processing framework: the key component of X-DB, built on top of the MySQL engine. We'll take a deep dive into the architecture, implementation details, and the technologies it leverages (e.g., high-performance RPC and scheduling subsystems).
We'll also share our achievements and lessons learned so far, as well as our roadmap.
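The plan-cache idea can be sketched generically: key the cache on a normalized statement so repeated queries skip the expensive optimization step. This is our own illustration of the technique, not X-DB's implementation; all names here are hypothetical.

```python
# Generic illustration of a statement plan cache (not X-DB's actual code).
# The optimizer step (index dives, cost estimation) is simulated as an
# expensive callback; cached plans let repeated statements skip it.
import re

class PlanCache:
    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def normalize(sql):
        # Replace literals with placeholders so "id = 1" and "id = 2"
        # share one cache entry, as parameterized statements would.
        return re.sub(r"\b\d+\b|'[^']*'", "?", sql.strip().lower())

    def get_plan(self, sql, optimize):
        key = self.normalize(sql)
        plan = self._cache.get(key)
        if plan is None:
            self.misses += 1
            plan = optimize(sql)  # expensive: index dives, costing
            self._cache[key] = plan
        else:
            self.hits += 1
        return plan

def optimize(sql):
    # Stand-in for the optimizer; returns a trivial "plan".
    return ("index_scan", PlanCache.normalize(sql))

cache = PlanCache()
cache.get_plan("SELECT * FROM t WHERE id = 1", optimize)
cache.get_plan("SELECT * FROM t WHERE id = 2", optimize)  # cache hit
print(cache.hits, cache.misses)  # → 1 1
```

The real engineering challenges (which the talk covers) are in cache invalidation and memory management, which this sketch omits entirely.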
Offering MySQL, PostgreSQL, and MariaDB database services in the cloud is different from doing so on-premises. Latency, connection redirection, and optimal performance configuration are just a few of the challenges. In this session, Jun Su will walk you through Microsoft's journey not only to offer these popular OSS RDBMSs in Microsoft Azure, but to implement them as a true DBaaS. Learn about Microsoft's Azure Database Services platform architecture and how these services are built to scale.
Existing tools like mysqldump and replication cannot migrate data between GTID-enabled and non-GTID-enabled MySQL -- a common configuration across multiple cloud providers that cannot be changed. These tools are also cumbersome to operate and error-prone, requiring a DBA's attention for each data migration. We built a tool that allows easy migration of data between MySQL databases, with constant downtime on the order of seconds regardless of data size.
Inspired by gh-ost, our tool is named Ghostferry and allows application developers at Shopify to migrate data without assistance from DBAs. It has been used to rebalance sharded data across databases. We plan to open source Ghostferry at the conference so that anyone can migrate their own data with minimal hassle and downtime. Since Ghostferry is written as a library, you can use it to build specialized data movers that move arbitrary subsets of data from one database to another.
Determining the best and most suitable relational database management system (RDBMS) for a given project isn't an easy task and can be rather challenging at times. It is like benchmarking fast cars created by different racing teams!
The presentation will compare, using a large body of experimental results, two highly available, closed-source cloud products: Amazon Aurora MySQL and RDS MySQL. Both are based on the open-source edition of MySQL.
Both use cases have demonstrated that MySQL is a great solution for concurrent write, read, and mixed read/write traffic.
Additionally, both scenarios have proven successful, satisfying data integrity, reliability, and scalability requirements, though with different outcomes.
The presentation will discuss some of the best practices for deciding whether to put your MySQL instances on Amazon RDS or Amazon Aurora, or to leave them on-premises. The session will go into the pros and cons of each platform, such as performance, versioning, limitations, and more. After this session, you will be equipped with the information needed to choose the right platform for your workload.
In this session, we will dive deep into the unique features and changes that make up Aurora PostgreSQL, including the architectural differences that contribute to improved scalability, availability, and durability. Some of the items we will cover are the elimination of checkpointing, the removal of the log buffer, and the use of a 4/6 quorum to improve durability and availability while reducing jitter.
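The quorum arithmetic is worth a quick aside. Per Aurora's published design, each data item has six copies across three Availability Zones, with a write quorum of four and a read quorum of three; since 4 + 3 > 6, any read set is guaranteed to overlap any write set. A small check of that claim:

```python
# Verify the quorum-overlap property for Aurora-style 4/6 writes, 3/6 reads.
from itertools import combinations

N = 6   # copies across three Availability Zones
W = 4   # write quorum
R = 3   # read quorum

copies = range(N)
# Every possible read quorum intersects every possible write quorum,
# because W + R > N (the pigeonhole argument made exhaustive).
overlap = all(set(w) & set(r)
              for w in combinations(copies, W)
              for r in combinations(copies, R))
print("W + R > N:", W + R > N)  # → True
print("all quorums overlap:", overlap)  # → True
```

This overlap is what lets a read quorum always observe the latest committed write without coordinating with every copy.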
Other areas we will cover are improvements in vacuum and the shared buffer cache, as well as some of our new features like Fast Clones and Performance Insights.
To finish off the session, we will walk through the techniques available to migrate to Aurora PostgreSQL.
In this session, we will deep dive into the exciting features of Amazon RDS for PostgreSQL, including new PostgreSQL releases, new extensions, and larger instances. We will show benchmarks of new RDS instance types and their value proposition, and look at how high availability and read scaling work on RDS PostgreSQL. Finally, we will share lessons we have learned managing a large fleet of PostgreSQL instances, including important tunables and possible gotchas around pg_upgrade.
How could AWS Database Migration Service work in your environment to migrate away from that proprietary colossus?
How does it work?
Why would you use this tool, and why would you avoid it?
What components does it have?
How does it perform?
These are all questions that will be answered during this talk. My goal is to provide you with an overview of its functionality and explain my findings.