In the last 20 years, researchers and vendors have built advisory tools to assist DBAs in tuning and physical design. Most of this previous work is incomplete, because these tools require humans to make the final decisions about any database changes and are reactive measures that fix problems after they occur. What is needed for a "self-driving" DBMS are components designed for autonomous operation. This will enable new optimizations that are not possible today, because the complexity of managing these systems has surpassed the abilities of humans.
In this talk, I present the core principles of an autonomous DBMS based on reinforcement learning. These are necessary to support ample data collection, fast state changes, and accurate reward observations. I will discuss techniques for building a new autonomous DBMS or retrofitting an existing one. Our work is based on our experiences at CMU developing an automatic tuning service (OtterTune) and a self-driving DBMS (Peloton).
If you are new to Percona XtraDB Cluster, or haven't heard of it before but would like to get acquainted with it, then this is the session for you. We will try to understand:
- What is Percona XtraDB Cluster?
- Is it useful for your use-case?
- What are the important features of Percona XtraDB Cluster (including recent 5.7 features)?
This session will cover what makes Percona XtraDB Cluster enterprise-ready and one of the most popular products in the clustering solution space. I am sure you will fall in love with it.
Twitter has been using its own fork of MySQL for many years. Last year the team decided to migrate to the community version of MySQL 5.7 and abandon their own version. The road to the community version was full of challenges.
In this session we will present the motivation and how we arrived at the decision. We will also discuss the challenges and surprises encountered and how we overcame them. Finally, we will talk about the lessons learned, our recommendations and our future plans.
Redundancy and high availability are the basis for all production deployments. Database systems with large data sets or high-throughput applications can exceed the capacity of a single server: CPU for high query rates, or RAM for large working sets. Vertical scaling by adding more CPU and RAM has its limits, so such systems need horizontal scaling, distributing data across multiple servers.
MongoDB supports horizontal scaling through sharding. Each shard consists of a replica set that provides redundancy and high availability. In this session we will talk about:
-How MongoDB HA works
-Replica set components/deployment topologies
-Cluster components - mongos, config servers and shards/replica set
-Shard keys and chunks
-Hashed vs. range based sharding
-Reads vs. writes on sharded cluster
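As a sketch of the hashed vs. range distinction (database, collection and field names here are invented for illustration, not from the session), choosing a shard key in the mongo shell looks like this:

```javascript
// Enable sharding for a (hypothetical) database
sh.enableSharding("appdb")

// Range-based shard key: chunks follow customerId order, so range
// queries are efficient, but monotonically increasing keys create hot shards
sh.shardCollection("appdb.orders", { customerId: 1 })

// Hashed shard key: writes are spread evenly across shards,
// at the cost of losing ordered range scans on the key
sh.shardCollection("appdb.events", { deviceId: "hashed" })
```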
AWS Aurora is one of the most promising cloud-based RDBMS solutions. One of the main reasons for Aurora's success is that it is based on the InnoDB storage engine.
In this session, we are going to talk about how you can plan an Aurora migration efficiently using Terraform and Percona products. We will share our Terraform code for launching AWS Aurora clusters, tricks to check data consistency, a migration path, and effective monitoring using PMM.
The session agenda includes:
1. Why AWS Aurora? Future of AWS Aurora?
2. Build Aurora Infra
i. Using Terraform (Without Data)
ii. Restore Using Terraform & Percona XtraBackup (Using AWS S3 Bucket)
3. Verify data consistency
4. Aurora Migration
i. 1:1 Migration
ii. Many:1 migration using Percona Server multi-source replication
5. Show Benchmarks and PMM dashboard
In this session, Geir will describe the new key features that have already been announced for MySQL 8.0.
In addition to the Data Dictionary, CTEs and window functions, the session covers:
* Move to utf8(mb4) as MySQL's default character set
* Language-specific case-insensitive collations for 21 languages (utf8)
* Invisible indexes
* Descending indexes
* Improved usability of UUID and IPv6 manipulation
* SQL roles
* SET PERSIST for global variable values
* Performance Schema, instrumenting data locks
* Performance Schema, instrumenting error messages
* Improved cost model with histograms
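A few of the features above can be sketched in SQL (table and index names below are invented for illustration):

```sql
-- SET PERSIST: the value survives a restart (stored in mysqld-auto.cnf)
SET PERSIST max_connections = 500;

-- Invisible index: hidden from the optimizer, useful to test
-- whether an index is still needed before dropping it
ALTER TABLE orders ALTER INDEX idx_created INVISIBLE;

-- Descending index: serves ORDER BY created_at DESC efficiently
CREATE INDEX idx_recent ON orders (created_at DESC);
```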
The presentation ends with some words on scalability, plugin infrastructure and GIS.
The world is becoming increasingly complex for professionals working with data. The amount and diversity of data are at a level never seen before, and we often have to use different tools to store the increasing amount of data, for instance MySQL and PostgreSQL.
This brings us another problem: how do we make this ecosystem work properly? Is it possible to make these systems talk to each other? Is it possible to replicate from MySQL to PostgreSQL, or vice versa?
This presentation aims to do just that: make MySQL talk to PostgreSQL and make them work together in a heterogeneous setup. We will look at different tools and techniques, such as PostgreSQL logical decoding, SymmetricDS, pg_chameleon and Tungsten Replicator, and in the end we will have a heterogeneous setup with MySQL and PostgreSQL replicating to each other.
DC/OS (Datacenter Operating System) is a novel solution for simply and efficiently running services and applications across on-premises, cloud or hybrid environments. Even though you can run any type of application, DC/OS is an excellent fit for container-based architectures.
Deploying a MySQL instance on DC/OS requires just a few clicks, but once the service is running, you may have a few questions, such as:
-"What" is actually running?
-How is data persistence handled?
-How can I enable HA for my databases?
The goal of this session is to help answer these questions and share some of the thought processes and lessons learned from a real-world implementation, provide the background of how DC/OS operates, and identify key items that need to be considered when deploying database services.
Braze is a lifecycle engagement platform used by consumer brands to deliver great customer experiences to over 1 billion monthly active users. In this talk, co-founder and CTO Jon Hyman will go over multiple production use cases Braze uses for buffering data to Redis for efficient real-time processing.
Braze processes more than a third of a trillion pieces of data each month when generating time series analytics for its customers. Jon will describe how each of these events gets buffered to Redis hashes, and some to Redis sets, before ultimately being flushed to Braze's analytics database hundreds of thousands of times per minute. This talk will also discuss how Redis sets are the cornerstone of Canvas, Braze's user journey orchestration product used by brands such as OKCupid, Postmates, and Microsoft.
Lastly, Jon will cover how Braze has written its own application-based sharding for Redis in order to support the millions of operations per second that Braze needs to handle its daily volume.
Continuent Tungsten Replicator enables you to effectively move data in real time from your database source into various targets. You can move data from your MySQL and Oracle transactional stores into your data warehouse, or into HPE Vertica or Hadoop. Furthermore, this can be performed from multiple sources into a single target, modifying and augmenting the data so that the information can be analyzed both in total and for the individual sources and hosts. This allows you to more effectively analyze your data without losing information, and because the data can be filtered, you can be secure in the knowledge that the information can be analyzed without releasing personal data. In this session, we'll examine the replication process and how the data can be concentrated and combined without losing the source identity information.
Designing highly available database systems isn't a new topic. On the contrary, with so many options available to architects now, choosing the best fit isn't always trivial. This talk will walk through various options including Standard Multi A/Z MySQL RDS, Aurora for RDS, and Percona XtraDB Cluster in EC2. Comparison points will cover failover processes, accompanying software (primarily ProxySQL), and general use cases.
This is not meant to be a deep dive into any particular design, but rather assist in choosing the proper architecture for a given use case.
The JSON data type and functions that support it comprise one of the most interesting features introduced in MySQL 5.7 for application developers. But no feature is a "Golden Hammer." We need to apply a little expertise to get the best result, and avoid misusing it. I'll show practical examples that work well with JSON, and other scenarios where conventional columns perform better.
Questions addressed in this presentation:
- How much space does JSON data use, compared to conventional data?
- What is the performance of querying JSON vs. conventional data?
- How do I create indexes for JSON data?
- What kind of data is best to store in JSON?
- How do I get the best of both worlds?
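One common pattern these questions hint at, sketched here with an invented schema, is indexing a JSON field through a generated column (since a JSON column itself cannot be indexed directly in MySQL 5.7):

```sql
CREATE TABLE events (
  id      BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  doc     JSON,
  -- extract one JSON field into a generated column so it can be indexed
  user_id BIGINT UNSIGNED
          AS (CAST(doc->>'$.user_id' AS UNSIGNED)) STORED,
  INDEX idx_user (user_id)
);

-- This lookup can use idx_user instead of scanning every JSON document
SELECT id FROM events WHERE user_id = 42;
```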
By now you surely know that MongoDB 3.6 Community became generally available on Dec 5, 2017. Of course, this is great news: it has some big ticket items that we are all excited about! But we also need to talk about our general thoughts on this release.
It is always a good idea for your internal teams to study and consider new versions. This is crucial for understanding if and when you should start using a release. After deciding to use it, there is the question of whether you want your developers using the new features (or whether they are not yet suitably implemented to be used).
So what is in MongoDB 3.6 Community?
- Change streams
- Retryable writes
- Security improvement
- Major love for arrays in aggregation
- A better balancer
- JSON Schema validation
- Better time management
- Compass Community edition
- Major WiredTiger internal overhaul
As you can see, this is an extensive list. But there are 1400+ implemented Jira tickets just on the server (not even counting the WiredTiger project).
This is a big release, so come hungry!
Security is an issue: controlling what and who can access your data is a must, and figuring out how to do it can be a nightmare.
But if you follow us on this journey, you will discover how to implement quite robust protection.
Even better, your performance will improve too. Cool, right?
We will discuss:
- Implement selective query access
- Define accessibility by location/ip/id
- Reduce to minimum cost of filtering
- Automate the query discovery
This session will be interesting to everyone looking for the latest news about MySQL 8.0 performance:
- MySQL 8.0 is getting closer and closer to GA
- But what about MySQL 8.0 performance? ;-)
The latest benchmark results obtained with MySQL 8.0 will be the center of the talk because every benchmark workload for MySQL is a "problem to resolve" and each resolved problem is a potential gain in your production!
Many important internal design changes are coming with MySQL 8.0:
- How to bring them in action most efficiently?
- What kind of trade-offs to expect, what is already good, and what is "not yet"?
- How well is MySQL 8.0 able to use the latest HW?
- Could you really speed up your IO by deploying your data on the latest flash storage?
These and many other questions are answered during this talk, plus proven by benchmark results.
MySQL and PostgreSQL are the two most popular open-source relational databases. To help choose between them, a comparison of their query optimizers has been carried out. The aim of this session is to summarize the outcome of the comparison. Specifically, to point out optimizer-related strengths and weaknesses.
An increasing number of stateless applications are being deployed in containers and managed by cluster schedulers. Until recently, stateful services like databases have been overlooked by many container-based orchestration systems.
In this talk, Bryant and Joshua will discuss the trade-offs of containerizing databases and survey the current support for databases in open source projects like Kubernetes and Apache Mesos. They will also dive into the architecture and operational concerns addressed by the tools they built to deploy and manage MySQL, PostgreSQL, and Redis in Docker containers as part of the Database team at New Relic.
Percona Monitoring and Management is quickly growing in both functionality and adoption. In this talk, we will discuss how to integrate it with other database management systems, to get the best out of it in environments that use more than just MySQL- or MongoDB-based solutions.
We will focus on how to set up monitoring for PostgreSQL and ClickHouse.
Azure provides fully managed, enterprise-ready community MySQL and PostgreSQL services that are built for developers and devops engineers. These services use the community database technologies you love and enable you to focus on your apps instead of management and administration burden. In this session, we will walk you through service capabilities such as built-in high availability, security, and elastic scaling of performance that allow you to optimize your time and save costs. We will demonstrate how the service integrates with the broader Azure platform enabling you to deliver innovative and new experience to your users. The talk will cover best practices and real customer examples to demonstrate the benefits and how you can easily migrate your databases to the managed service.
Do you have a new project for which you need to choose a DBMS? Do you need to evaluate moving to MySQL from MongoDB, or vice versa? In this talk, we'll show the strengths and weaknesses of both databases, and the best usage for each one.
At Square we operate several thousand MySQL instances to power a financial network, from payments to payroll. In a word: money. "Mission-critical" isn't critical enough. Come learn how we operate MySQL with billions of dollars at stake. We'll look at everything: configuration, management, monitoring, tooling, security, high availability, replication, etc.
This session will give an overview of how we leverage old technology with modern solutions, which helped us improve and optimize document manipulation using MongoDB.
Alibaba Database Team is developing X-DB, the next-generation distributed database to support the world's largest and still fast-growing e-commerce platform. Substantial innovations have been made on different aspects of X-DB to enhance its capability and performance since last year. In this talk we'll introduce what we've accomplished in the SQL Engine.
* Plan Cache: a plus to MySQL Engine which boosts QPS by up to 170% on sysbench workloads and 34% on Alibaba's online purchasing system. We'll explain details of how it is created and used to skip heavy index dives at optimization stage, the efficient cache management, and the performance benefits.
* A state-of-the-art distributed SQL processing framework: the key component of X-DB, built on top of the MySQL Engine. We'll take a deep dive into the architecture, implementation details and leveraged technologies (e.g., the high-performance RPC and scheduling subsystems).
We'll also share the achievements and lessons learned so far, as well as our roadmap.
We will discuss Percona Server 8.0: features, performance, changes from 5.7, comparison with MySQL 8.0.
In this session, we will cover a number of ways you can tune PostgreSQL to better handle high write workloads. We will cover both application- and database-tuning methods, as each type can have substantial benefits but can also interact in unexpected ways when you are operating at scale.
On the application side, we will look at write batching, use of GUIDs, general index structure, the cost of additional indexes and the impact of working set size. For the database, we will see how WAL compression, autovacuum and checkpoint settings, as well as a number of other configuration parameters, can greatly affect the write performance of your database and application.
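As a hedged illustration (starting points to test against your own workload, not universal recommendations), the database-side knobs mentioned above live in postgresql.conf:

```
# postgresql.conf -- illustrative values; validate against your workload
wal_compression = on                  # trade CPU for smaller full-page writes
max_wal_size = 8GB                    # fewer forced checkpoints during bursts
checkpoint_timeout = 15min            # spread checkpoint I/O over time
checkpoint_completion_target = 0.9
autovacuum_vacuum_cost_limit = 2000   # let autovacuum keep up with write churn
```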
In this session, John Jainschigg, Master of the Universe, and Bill Bauman, Head of Innovation and Strategy at Opsview, will review the steps taken to build a simple Kubernetes infrastructure to support serverless workloads.
The infrastructure includes an installation of Percona DB for writing structured form data.
This is relatively high level, focused on the capabilities and possibilities of what can be accomplished using modern technologies including orchestrated containers and serverless infrastructure in your own datacenter. A key point of interest during the discovery phase was, how do we monitor this thing?
We're eager to discuss your ideas, questions and comments, so join us!
The earliest relational databases were monolithic on-premise systems that were powerful and full-featured. Fast forward to the Internet and NoSQL: BigTable, DynamoDB and Cassandra. These distributed systems were built to scale out for ballooning user bases and operations. As more and more companies vied to be the next Google, Amazon, or Facebook, they too "required" horizontal scalability.
But in a real way, NoSQL and even NewSQL have forgotten single node performance where scaling out isn't an option. And single node performance is important because it allows you to do more with much less. With a smaller footprint and simpler stack, overhead decreases and your application can still scale.
In this talk, we describe TimescaleDB's methods for single node performance. The nature of time-series workloads and how data is partitioned allows users to elastically scale up even on single machines, which provides operational ease and architectural simplicity, especially in cloud environments.
The presentation will be a real-life study of how we use PMM for monitoring 120+ MySQL and ProxySQL servers, as well as for query optimization.
We will show how we've been able to deploy PMM on a large scale, with 120+ (and growing) monitored instances, as well as how we're using it to find problems, both system health and performance-wise, sometimes even before impacting the production environment.
During the project, we found a few caveats that others embarking on this journey should be aware of. We've also found a few "hidden features," where we're able to use PMM in ways beyond the standard interfaces, thanks to the fact that it's all built on open and battle-tested software.
Keeping data safe is the top responsibility of anyone running a database. Learn how the Google Cloud SQL team protects against data loss. Cloud SQL is Google's fully-managed database service that makes it easy to set up and maintain MySQL databases in the cloud. In this session, we'll dive into Cloud SQL's storage architecture to learn how we check data down to the disk level. We will also discuss MySQL checksums and infrastructure Cloud SQL uses to verify that checksums for data files are accurate without affecting performance of the database.
Aimed at beginners, this conference talk will cover the basics behind indexes in MySQL. How do you use them, how do you know if MySQL used them, and how effective is a particular index? These questions and more will be answered.
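For instance, the classic way to check whether MySQL used an index is EXPLAIN (table and index names below are made up for illustration):

```sql
CREATE INDEX idx_last_name ON customers (last_name);

EXPLAIN SELECT * FROM customers WHERE last_name = 'Smith';
-- If the "key" column shows idx_last_name, the index was used;
-- "rows" estimates how many rows MySQL expects to examine.
```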
MySQL replication allows you to write on one writer server and easily scale out reads by redirecting reads to reader servers. But how do we guarantee that reads are consistent with the client's last write? Galera replication can guarantee that, while MySQL Group Replication and standard MySQL async replication cannot.
If you are running MySQL Server or Percona Server, version 5.7 or newer, with GTID enabled, ProxySQL 2.0 is now able to ensure read consistency with the last write. ProxySQL can stream GTID information from all the reader servers and, in real time, determine which reader server(s) can execute a SELECT statement producing a resultset that is read-consistent with the last write (and GTID) executed by each client.
This presentation will show the technical details that allow you to build an architecture with thousands of ProxySQL instances and MySQL servers, and how GTID information is processed in real-time with limited bandwidth footprint.
You need to store and retain time-series data in MongoDB and HDDs can't keep up with your insert rate, but you can't afford to keep everything on SSD? Then this presentation is for you. You'll learn how to use sharding and shard tagging to keep your inserts and most recent data on the fast SSDs and your archived data on the cheap HDDs, and how to quickly and efficiently transition data from SSD to HDD. You'll also learn the best programming techniques for adding your time-series data and accessing the data as a stream without missing any data points.
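The SSD/HDD split described above is typically done with shard tags; here is a sketch in the mongo shell, with invented shard, database and collection names:

```javascript
// Tag the fast and slow shards
sh.addShardTag("shardSSD", "recent")
sh.addShardTag("shardHDD", "archive")

// Pin time ranges of the shard key to each tier; rotate the
// cutoff periodically so aging chunks migrate from SSD to HDD
var cutoff = ISODate("2018-01-01T00:00:00Z")
sh.addTagRange("appdb.metrics", { ts: MinKey }, { ts: cutoff }, "archive")
sh.addTagRange("appdb.metrics", { ts: cutoff }, { ts: MaxKey }, "recent")
```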
Offering MySQL, PostgreSQL and MariaDB database services in the cloud is different than doing so on-premise. Latency, connection redirection, optimal performance configuration are just a few challenges. In this session, Jun Su will walk you through Microsoft's journey to not only offer these popular OSS RDBMS in Microsoft Azure, but how it is implemented as a true DBaaS. Learn about Microsoft's Azure Database Services platform architecture, and how these services are built to scale.
The next version of MySQL will be a major release of new features and capabilities, including a new data dictionary hosted in InnoDB, new REDO logs design, new UNDO logs, new scheduler, descending indexes, and much more!
Learn all about the changes coming in the next version of InnoDB delivered with MySQL 8.0!
While most applications are aware of the minimum basic security features, there is often a lack of understanding about how best to manage them, especially with major security features being released with every major version of Postgres. As for advanced features, sadly most of them go unnoticed and unused in most cases. This talk will cover the various features that Postgres provides for data security, from the very basic to the most advanced:
- Postgres HBA and types of authentications
- Permissions and ACL in Postgres
- Row-level security
- Event triggers
- PCI security implementation techniques
- Filesystem permission options
- Data encryption management in Postgres
- Table level auditing and storage efficiency
- Monitoring for SQL injections
- Other PostgreSQL security features
- Tips for security enhancement for Postgres as a Service users (RDS, GCE, Azure Postgres)
- Upcoming security features in Postgres 11
- Features that Postgres currently lacks
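Of the features listed, row-level security is perhaps the easiest to sketch (the table, column and setting names below are hypothetical):

```sql
CREATE TABLE accounts (
  id      serial PRIMARY KEY,
  tenant  text NOT NULL,
  balance numeric
);

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

-- Each session only sees rows belonging to its own tenant
CREATE POLICY tenant_isolation ON accounts
  USING (tenant = current_setting('app.tenant'));

-- Application sessions then run:
--   SET app.tenant = 'acme';
-- and every query on accounts is transparently filtered.
```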
In this session, we will discuss our fully automated failover solution running in containers on Kubernetes. Using Orchestrator for MySQL failovers, ProxySQL to route queries and a Zookeeper-backed application we wrote called Taiji for service discovery, database failures and topology changes are handled without any human intervention. This system is tolerant to network partitions and connectivity issues, node failures, and even full region outages.
After adding additional functionality to Orchestrator, we have it deployed with the raft consensus protocol and automatic failovers enabled. ProxySQL is deployed alongside a Taiji container that watches for changes in Zookeeper. All topology changes are automatically pushed to Zookeeper via Orchestrator callback scripts and a Taiji agent that performs health checks on databases. In less than a second, these changes are pushed to ProxySQL, so our application will seamlessly begin sending read and write queries to the proper database.
ClickHouse is an open source analytical DBMS. It is capable of storing petabytes of data and processing billions of rows per second per server, all while ingesting new data in real-time.
I will talk about ClickHouse's internal design and unique implementation details that allow us to achieve maximum query processing performance and data storage efficiency.
While GitHub isn't the biggest database around in terms of the amount of data we hold in MySQL, it is among the top 50 busiest sites on the internet. Facing an immediate need to distribute load, we came up with creative ways to move a significant amount of traffic off of our main MySQL cluster, with no user impact. Moving five of our hottest tables required collaboration between engineers, DBAs and SRE. This talk will describe when and how to do it, and prove it to be an efficient database scalability solution.
Moving tables required changes to our database infrastructure as well as our application. I'll explain the impetus for this work and why we did it. We'll walk through the application-level changes that allowed us to change connections while still serving data. Then, I'll discuss the ways we moved tables to different clusters, using MySQL replication, or in some cases, temporary sharding and copying billions of rows. Finally, I'll outline the orchestration of the actual cutovers.
Accelerating MySQL with Just-In-Time (JIT) compilation is emerging as a quick and easy way to achieve greater efficiencies with MySQL. In this talk, I'll go over the benefits and caveats of using Dynimizer, a binary-to-binary JIT compiler, with MySQL workloads. I'll discuss how to identify situations where JIT compilation can help, how to get set up and running, and go over benchmark results along with other performance metrics. We'll also peek under the hood and take a look at what's happening at a lower level.
In this 101 topic, we are going to discuss the advantages and disadvantages of MongoDB.
For those who have never worked with MongoDB and are considering migrating to this database, this talk will give you a real overview of the features and how to avoid common mistakes and misconceptions.
The full title of this presentation should be: "Save some bandwidth by not transmitting the full resultset metadata over the wire when you don't need it." Indeed, one of the latest features in the MySQL protocol allows you to save some network bandwidth by not sending the metadata for resultsets whose metadata you already know.
Join this talk to learn how to turn this on, and how much data it saves per query.
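If my reading of the feature is right (it shipped as the resultset_metadata system variable in MySQL 8.0.3, usable only by clients that announce the CLIENT_OPTIONAL_RESULTSET_METADATA capability flag when connecting), turning it on looks roughly like:

```sql
-- Tell the server to skip column metadata for this session's resultsets
SET SESSION resultset_metadata = NONE;

-- ... run queries whose column layout the client already knows ...

-- Restore the default behavior
SET SESSION resultset_metadata = FULL;
```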
This presentation will discuss implementing external authentication when using Percona Server for MongoDB and MongoDB Enterprise. It will review authentication using OpenLDAP or ActiveDirectory and ActiveDirectory with Kerberos.
The presentation will also include examples of the configurations required by these external directory services. It will also review the LDAP Authorization features introduced in MongoDB Enterprise 3.4.
Existing tools like mysqldump and replication cannot migrate data between GTID-enabled MySQL and non-GTID-enabled MySQL -- a common configuration across multiple cloud providers that cannot be changed. These tools are also cumbersome to operate and error-prone, thus requiring a DBA's attention for each data migration. We introduced a tool that allows for easy migration of data between MySQL databases with constant downtime on the order of seconds.
Inspired by gh-ost, our tool is named Ghostferry and allows application developers at Shopify to migrate data without assistance from DBAs. It has been used to rebalance sharded data across databases. We plan to open source Ghostferry at the conference so that anyone can migrate their own data with minimal hassle and downtime. Since Ghostferry is written as a library, you can use it to build specialized data movers that move arbitrary subsets of data from one database to another.
There are substantial improvements in the Optimizer in MySQL 8.0. Most noticeably, we have added support for advanced SQL features like common table expressions, windowing functions and grouping() function. We also made DBAs' lives easier with invisible indexes, and additional hints that can be used together with the query rewrite plugin.
On the performance side, cost model changes will make a huge impact. We have made JSON support even more powerful by adding JSON table function, aggregation functions and more. Come and learn about new features in MySQL 8.0!
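As a small taste of the 8.0 SQL additions mentioned above (the orders table is invented for illustration), here is a common table expression combined with a window function:

```sql
WITH monthly AS (
  SELECT DATE_FORMAT(order_date, '%Y-%m') AS ym,
         SUM(amount) AS total
  FROM orders
  GROUP BY ym
)
SELECT ym,
       total,
       SUM(total) OVER (ORDER BY ym) AS running_total  -- window function
FROM monthly;
```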
Learn how to monitor PostgreSQL using PMM (Percona Monitoring and Management) so that you can:
* Gain greater visibility into performance and bottlenecks in PostgreSQL
* Consolidate your PostgreSQL servers into the same monitoring platform you already use for MySQL and MongoDB
* Respond more quickly and efficiently to Severity 1 issues
We'll show how, using PMM's External Exporters functionality, you can have PostgreSQL integrated in only minutes!
This talk is about measuring and reducing noise in benchmark results. Properly tuning the operating system and hardware to achieve stable results in benchmarks becomes an art in itself these days. There may be many reasons for that:
- jitter in CPU and I/O schedulers
- dynamic CPU frequency scaling
- process address space randomization
- kernel configuration
If you are not seeing stable results in your performance comparisons, you are wasting your time. Since I do a lot of MySQL benchmarks as a part of my job, I have collected a number of recipes to measure and reduce system noise and achieve more stable numbers in benchmarks. I'm going to describe those recipes as well as the new sysbench module implemented to automate those tasks and simplify system tuning for other people.
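A few of the recipes alluded to above, as hedged Linux examples (exact paths and tools vary by distribution and hardware, and all of these need root):

```shell
# Pin the CPU frequency governor to avoid dynamic scaling jitter
cpupower frequency-set --governor performance

# Disable address space layout randomization for repeatable memory layouts
echo 0 > /proc/sys/kernel/randomize_va_space

# On Intel P-state systems, disable turbo for steadier clock speeds
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

# Pin the benchmark to fixed cores to reduce scheduler noise
taskset -c 0-7 sysbench oltp_read_write run
```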
ClickHouse is a very fast and feature-rich open source analytics DBMS with multi-petabyte scale. It has gained a lot of attention over the last year, thanks to excellent benchmark results, conference talks and the first successful projects.
After the initial wave of early adopters, the second wave is coming: many companies have started to consider ClickHouse as their analytics backend. In this talk, I'll review the state of ClickHouse adoption worldwide, share insights about the business problems ClickHouse helps solve efficiently, highlight possible implementation challenges and discuss best practices.
Time series databases are sprouting up like mushrooms. At Grafana Labs, the company behind Grafana, we built a new engine specifically for GrafanaCloud. Why would we do that? Learn about the design considerations, lessons learned, and tradeoffs we made in designing this engine that is compatible with both Graphite and Prometheus.
For the cloud environment, we want a MySQL cluster that can fail over and choose a new master node by itself, automatically, without third-party middleware. So we built the Raft protocol into MySQL.
In the MySQL-Raft version, every cluster has three nodes: one master and two slaves. We only need to designate a master when the cluster starts. After that, when the master node goes down, the cluster elects a new master via the Raft protocol, and uses Flashback to roll back committed transactions if needed, to ensure all nodes are identical. When a failed node rejoins the cluster, or a new node is added, it becomes a slave; Raft will not switch masters when no nodes are down.
Orchestrator uses Raft consensus as of version 3.x. This setup improves the high availability of both the orchestrator service itself as well as that of the managed topologies and allows for easier operations.
This session will briefly introduce Raft consensus concepts, and elaborate on orchestrator's use of Raft: from leader election, through high availability, cross DC deployments and DC fencing mitigation, and lightweight deployments with SQLite.
Of course, nothing comes for free, and we will discuss considerations to using Raft: expected impact, eventual consistency and time-based assumptions.
Orchestrator/Raft is running in production at GitHub, Wix and other large and busy deployments.
A MongoDB cluster is excellent at scaling out to support large web traffic. In this session, I will talk about the following topics:
- Typical MongoDB cluster topologies that support large traffic
- Best practices for managing MongoDB clusters, such as adding/removing shards, adding/removing indexes, etc.
- Methods for finding bottlenecks and optimizing clusters
Determining the best and most suitable relational database management system (RDBMS) for a given project isn't an easy task and can be rather challenging at times. It is like benchmarking fast cars created by different racing teams!
The presentation will compare, using a large body of experimental results, two highly available, closed-source cloud products: Amazon Aurora MySQL and RDS MySQL. Both are based on the open source edition of MySQL.
Both use cases have demonstrated that MySQL is a great solution for concurrent writes, reads and read and write traffic.
Additionally, both scenarios have proven to be successful, satisfying data integrity, reliability and scalability with different outcomes.
In this session, we will explore how we managed to scale Percona XtraDB Cluster: what major issues we found and how we fixed them (a technical walkthrough), what added advantages this optimization had for the overall product, and how Percona XtraDB Cluster is now truly an enterprise-ready clustering solution.
A look at the latest production release PG10 and the latest features in the forthcoming PG11.
Learn about the benefits of running your database on the Mesosphere cloud container platform, and get a preview of what Percona is working on with Mesosphere.
This year the Cassandra team at Instagram has been working on a very interesting project: making Apache Cassandra's storage engine pluggable, and implementing a new RocksDB-based storage engine in Cassandra. The new storage engine can improve the performance of Apache Cassandra significantly, making Cassandra 3-4 times faster in general, and even 100 times faster in some use cases.
In this talk, we will describe the motivation and the different approaches we considered, the high-level design of the solution we chose, as well as the performance metrics in benchmark and production environments.
Query tuning can be complex. It's often hard to know which knob to turn or button to press to get the biggest performance boost. In this presentation, Janis Griffin, database performance evangelist at SolarWinds, will share her secrets for determining the best approach for tuning queries by utilizing the performance schema (specifically instrumented wait events and thread states), query execution plans, SQL diagramming techniques and more.
Regardless of the complexity of your database or your skill level, this systematic approach will lead you down the correct tuning path with no guessing, saving countless hours of tuning queries and optimizing performance of your MySQL® databases.
- Learn how to effectively use the performance schema to quickly identify bottlenecks and get clues on the best tuning approach
- Quickly identify inefficient operations through review of query execution plans
- Learn how to use SQL diagramming techniques to find the best plan