ProxySQL is an open source proxy for MySQL that is able to provide high availability (HA) and high performance with no changes in the application, using several built-in features and integration with clustering software.
Those are only a few of the features you'll learn about in this hands-on tutorial.
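As a taste of the "no changes in the application" approach, read/write splitting in ProxySQL is configured entirely through its admin interface (port 6032 by default). A minimal sketch — hostgroup IDs and hostnames here are illustrative, not part of any real deployment:

```sql
-- Register backend MySQL servers in writer (10) and reader (20) hostgroups
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'mysql-primary.example.com', 3306),
       (20, 'mysql-replica1.example.com', 3306);

-- Route SELECTs to the reader hostgroup; everything else follows the default
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT', 20, 1);

-- Activate the new configuration and persist it
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;
```

The application keeps connecting to a single endpoint; ProxySQL handles the routing behind it.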
Performance Schema in MySQL is maturing from version to version. It now includes extended lock instrumentation, memory usage statistics, new tables for server variables, and first-ever instrumentation for user variables, prepared statements, and stored routines.
Version 8.0 adds instrumentation for additional variables, replication, error messages, and data locks. A lot! Amazing! And complicated!
In this tutorial, we will try all these instruments out. We will provide a test environment and a few typical problems that would have been difficult to solve before MySQL 5.7. Just a few examples:
- "Where is memory going?"
- "Why are these queries hanging?"
- "How large is the overhead of my stored procedures?"
- "Why are queries waiting for metadata locks?"
You will not only learn how to collect and use this information but also gain practical experience with it, along with many details of how to set up Performance Schema.
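For a flavor of the instruments involved, these are the kinds of Performance Schema and sys schema queries that questions like "Where is memory going?" and "Why are queries waiting for metadata locks?" lead to (a sketch against MySQL 5.7/8.0; output will of course depend on your workload):

```sql
-- "Where is memory going?" -- top consumers via the sys schema view
-- over performance_schema memory instrumentation (5.7+)
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;

-- "Why are queries waiting for metadata locks?"
-- In 5.7 the mdl instrument is disabled by default; enable it first:
UPDATE performance_schema.setup_instruments
SET enabled = 'YES'
WHERE name = 'wait/lock/metadata/sql/mdl';

-- Then look for pending metadata lock requests and who holds the locks
SELECT object_type, object_schema, object_name,
       lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE lock_status = 'PENDING';
```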
The world is becoming increasingly complex for professionals working with data. The amount and diversity of data are on a level never before seen, and we often have to use different tools to store the increasing amount of data, for instance, MySQL and PostgreSQL.
This brings another problem: how do we make this ecosystem work properly? Is it possible to make these databases talk to each other? Is it possible to replicate from MySQL to PostgreSQL, or vice versa?
This presentation aims to do just that: make MySQL talk to PostgreSQL and make them work together in a heterogeneous setup. We will look at different tools and techniques, such as PostgreSQL Logical Decoding, SymmetricDS, pg_chameleon, and Tungsten Replicator, and in the end we will have a heterogeneous setup with MySQL and PostgreSQL replicating to each other.
MySQL and PostgreSQL are the two most popular open-source relational databases. To help choose between them, a comparison of their query optimizers has been carried out. The aim of this session is to summarize the outcome of that comparison, and specifically to point out optimizer-related strengths and weaknesses.
In this session, we will cover a number of ways you can tune PostgreSQL to better handle high-write workloads. We will cover both application and database tuning methods, as each type can have substantial benefits but can also interact in unexpected ways when you are operating at scale.
On the application side, we will look at write batching, the use of GUIDs, general index structure, the cost of additional indexes, and the impact of working set size. For the database, we will see how WAL compression, autovacuum and checkpoint settings, as well as a number of other configuration parameters, can greatly affect the write performance of your database and application.
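To make the database-side knobs concrete, here is a postgresql.conf fragment touching the parameters mentioned above. The values are illustrative starting points only, not recommendations — appropriate settings depend heavily on hardware and workload:

```
# postgresql.conf -- illustrative values, tune for your workload

wal_compression = on                    # trade CPU for less WAL volume

# Spread checkpoints out to smooth write I/O spikes
checkpoint_timeout = 15min
max_wal_size = 8GB
checkpoint_completion_target = 0.9

# Make autovacuum keep up under heavy write load
autovacuum_naptime = 10s
autovacuum_vacuum_cost_limit = 1000
```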
While most application developers are aware of the minimum basic security features, there is often a lack of understanding of how best to manage them, especially with new security features being released in every major version of Postgres. As for the advanced features, sadly most go unnoticed and unused. This talk will cover the various features that Postgres provides for data security, from the very basic to the most advanced:
- Postgres HBA and types of authentication
- Permissions and ACL in Postgres
- Row-level security
- Event triggers
- PCI security implementation techniques
- Filesystem permission options
- Data encryption management in Postgres
- Table level auditing and storage efficiency
- Monitoring for SQL injections
- Other PostgreSQL security features
- Tips for security enhancement for Postgres as a Service users (RDS, GCE, Azure Postgres)
- Upcoming security features in Postgres 11
- Features that Postgres currently lacks
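As one example from the list above, row-level security lets you enforce per-tenant visibility inside the database itself rather than in application code. A minimal sketch — the table, policy, role, and setting names here are hypothetical:

```sql
-- Hypothetical multi-tenant table
CREATE TABLE orders (
    id      serial PRIMARY KEY,
    tenant  text NOT NULL,
    total   numeric
);

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Each session sees only rows for its own tenant,
-- e.g. after: SET app.current_tenant = 'acme';
CREATE POLICY tenant_isolation ON orders
    USING (tenant = current_setting('app.current_tenant'));

GRANT SELECT, INSERT ON orders TO app_user;
```

Note that table owners and superusers bypass RLS by default, which is one of the management subtleties this kind of talk covers.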
Learn how to monitor PostgreSQL using PMM (Percona Monitoring and Management) so that you can:
* Gain greater visibility into PostgreSQL performance and bottlenecks
* Consolidate your PostgreSQL servers into the same monitoring platform you already use for MySQL and MongoDB
* Respond more quickly and efficiently to Severity 1 issues
We'll show how, using PMM's External Exporters functionality, you can have PostgreSQL integrated in only minutes!
A look at the latest production release PG10 and the latest features in the forthcoming PG11.
PostgreSQL version 10 added logical replication, and the range of replication options in PostgreSQL has grown wide: streaming replication, warm standby, logical replication?
We'll discuss what the options are, their limitations and pitfalls, and what the best use-case for each one is. We'll show what it takes to set each one up, monitor it, and get it working again on failures. We'll cover:
* The history of replication in PostgreSQL.
* WAL shipping
* Streaming replication
* Trigger-based replication
* Logical decoding
* And some exotic animals
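As a taste of the newest of these options, logical replication in PostgreSQL 10 takes only a publication, a subscription, and a monitoring query. A minimal sketch — hostnames and credentials are placeholders, and the publisher must run with `wal_level = logical`:

```sql
-- On the publisher:
CREATE PUBLICATION my_pub FOR TABLE accounts, orders;

-- On the subscriber (the tables must already exist with matching schemas):
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=primary.example.com dbname=app user=repl password=secret'
    PUBLICATION my_pub;

-- Back on the publisher: monitor replication lag in bytes
SELECT application_name, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```

The limitations (no DDL replication, no sequence replication, per-table granularity) are exactly the kind of pitfalls the session discusses.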
This session is a review of the various ways in which PostgreSQL allows you to distribute your data across multiple nodes: remote data access, replication, sharding, distributed query and multi-master.
PolarDB provides read scale-out on a shared-everything architecture. It features 100% backward compatibility with MySQL 5.6 and the ability to expand the capacity of a single database to over 100 TB. Users can expand the computing engine and storage capacity in just a matter of seconds! PolarDB offers a 6x performance improvement over MySQL 5.6 and a significant drop in cost compared to other commercial databases.
PolarDB leverages InnoDB's redo logs for physical replication. InnoDB records physical page-level operations in its redo logs for crash recovery; PolarDB extends this functionality to deploy multiple read replicas for read load sharing.
In this talk we'll take a deep dive into InnoDB internals and explain the changes we made to the core InnoDB code. We'll touch upon design issues around logging, crash recovery, buffer pool management, MVCC, DDL synchronization etc.
This talk will be mostly about the core internals of InnoDB. Some basic knowledge of internals like redo logs, undo logs, read view (transaction isolation), purge and buffer pool management will be very helpful.
At Datadog we handle trillions of points of data per day from the thousands of customers that rely on us to monitor their applications and infrastructure. In this session, I'll share how we've scaled PostgreSQL to not only handle the deluge of data, but how we've made our PostgreSQL systems more resilient.
I'll also discuss which metrics to watch and how troubleshooting based on those metrics will help you solve problems more quickly. In this session, we will look at a framework for your metrics and how to use it to find solutions to the issues that come up.
We will cover the three types of monitoring data: what to collect, what should trigger an alert (avoiding an alert storm and pager fatigue), and how to follow the resources to find the root causes of problems.
The focus of this session is not tool-specific, so attendees will leave with strategies and frameworks they can implement in their environments today, regardless of the platforms and tools they use.