The world is becoming increasingly complex for professionals working with data. The amount and diversity of data are at a level never seen before, and we often have to use different tools to store it all, for instance MySQL and PostgreSQL.
This brings another problem: how do we make this ecosystem work properly? Is it possible to make these systems talk to each other? Is it possible to replicate from MySQL to PostgreSQL, or vice versa?
This presentation aims to answer those questions: to make MySQL talk to PostgreSQL and make them work together in a heterogeneous setup. We will look at different tools and techniques, such as PostgreSQL Logical Decoding, SymmetricDS, pg_chameleon, and Tungsten Replicator, and in the end we will have a heterogeneous setup with MySQL and PostgreSQL replicating to each other.
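As a taste of the logical decoding building block mentioned above, here is a minimal sketch using PostgreSQL's built-in test_decoding output plugin (the table and slot names are invented for illustration; it requires `wal_level = logical` on the source server):

```sql
CREATE TABLE t (id int PRIMARY KEY);

-- Create a logical replication slot using the built-in test_decoding plugin.
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding');

-- Make a change, then peek at the decoded change stream without consuming it.
INSERT INTO t (id) VALUES (1);
SELECT * FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL);

-- Clean up.
SELECT pg_drop_replication_slot('demo_slot');
```

Tools such as pg_chameleon build on exactly this kind of change stream to feed rows from one system into another.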
MySQL and PostgreSQL are the two most popular open-source relational databases. To help in choosing between them, a comparison of their query optimizers has been carried out. The aim of this session is to summarize the outcome of that comparison, specifically to point out optimizer-related strengths and weaknesses.
In this session, we will cover a number of the ways that you can tune PostgreSQL to better handle high write workloads. We will cover both application and database tuning methods, as each type can have substantial benefits but can also interact in unexpected ways when you are operating at scale.
On the application side, we will look at write batching, the use of GUIDs, general index structure, the cost of additional indexes, and the impact of working set size. For the database, we will see how WAL compression, autovacuum and checkpoint settings, as well as a number of other configuration parameters, can greatly affect the write performance of your database and application.
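As an illustration of the write-batching idea on the application side, here is a minimal sketch. The chunk size and the psycopg2-style `executemany` call in the comment are assumptions for illustration, not recommendations from the talk:

```python
from itertools import islice

def batched(rows, size):
    """Yield lists of at most `size` rows, so that many single-row
    INSERTs become a few multi-row round trips to the server."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# With a driver such as psycopg2, each batch would go out as one call, e.g.:
#   cur.executemany("INSERT INTO events (payload) VALUES (%s)", batch)
# Here we only demonstrate the batching itself.
rows = range(10)
print(list(batched(rows, 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Fewer, larger writes reduce per-statement overhead and WAL flushes, which is why batching is usually the first application-side lever to pull.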
While most application teams are aware of the basic security features, there is often a lack of understanding about how best to manage them, especially with new security features being released in every major version of Postgres. As for the advanced features, sadly most of them go unnoticed and unused. This talk will cover the various features that Postgres provides for data security, from the most basic to the most advanced:
- Postgres HBA and types of authentications
- Permissions and ACL in Postgres
- Row-level security
- Event triggers
- PCI security implementation techniques
- Filesystem permission options
- Data encryption management in Postgres
- Table level auditing and storage efficiency
- Monitoring for SQL injections
- Other PostgreSQL security features
- Tips for security enhancement for Postgres as a Service users (RDS, GCE, Azure Postgres)
- Upcoming security features in Postgres 11
- Features that Postgres currently lacks
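To give a flavor of one of the features above, here is a minimal row-level security sketch (the table, column, and policy names are invented for illustration):

```sql
-- Each role may only see rows belonging to its own tenant.
CREATE TABLE accounts (id int, tenant text, balance numeric);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON accounts
    USING (tenant = current_user);

-- The table owner bypasses RLS unless it is forced:
ALTER TABLE accounts FORCE ROW LEVEL SECURITY;
```

With the policy in place, ordinary queries against `accounts` are transparently filtered, with no changes needed in application SQL.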
Learn how to monitor PostgreSQL using PMM (Percona Monitoring and Management) so that you can:
* Gain greater visibility into PostgreSQL performance and bottlenecks
* Consolidate your PostgreSQL servers into the same monitoring platform you already use for MySQL and MongoDB
* Respond more quickly and efficiently to Severity 1 issues
We'll show how, using PMM's External Exporters functionality, you can have PostgreSQL integrated in only minutes!
A look at the latest production release PG10 and the latest features in the forthcoming PG11.
PostgreSQL version 10 added logical replication, and the field of replication options in PostgreSQL has grown wide: streaming replication, warm standby, logical replication?
We'll discuss what the options are, their limitations and pitfalls, and what the best use case for each one is. We'll show what it takes to set each one up, monitor it, and get it working again after failures. We'll cover:
* The history of replication in PostgreSQL.
* WAL shipping
* Streaming replication
* Trigger-based replication
* Logical decoding
* And some exotic animals
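As a taste of the setup work involved, here is a minimal sketch of the streaming-replication case on PostgreSQL 10 (host names, the role name, and the password are placeholders; from version 12 onward the standby settings live in postgresql.conf instead of recovery.conf):

```
# postgresql.conf on the primary
wal_level = replica
max_wal_senders = 5
wal_keep_segments = 64   # or use a replication slot instead

# pg_hba.conf on the primary (the standby's address is a placeholder)
host  replication  replicator  192.0.2.10/32  md5

# recovery.conf on the standby (PostgreSQL 10)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator password=secret'
```

The standby is then seeded with a base backup (for example via pg_basebackup) before it starts streaming.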
This session is a review of the various ways in which PostgreSQL allows you to distribute your data across multiple nodes: remote data access, replication, sharding, distributed query and multi-master.
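As one concrete example of the remote-data-access option, here is a postgres_fdw sketch (the server, user-mapping, and table names are invented for illustration):

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER shard1
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.example.com', dbname 'app');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER shard1
    OPTIONS (user 'app', password 'secret');

-- Queries against this table are sent to the remote node,
-- with filters and joins pushed down where possible.
CREATE FOREIGN TABLE orders_remote (
    id    int,
    total numeric
) SERVER shard1 OPTIONS (table_name 'orders');
```

The same foreign-table machinery is also a common building block for sharding and distributed-query setups.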
At Datadog we handle trillions of data points per day from the thousands of customers who rely on us to monitor their applications and infrastructure. In this session, I'll share how we've scaled PostgreSQL to handle the deluge of data, and how we've made our PostgreSQL systems more resilient.
I'll also discuss which metrics to watch and how troubleshooting based on those metrics will help you solve problems more quickly. We will look at a framework for your metrics and how to use it to find solutions to the issues that come up.
We will cover the three types of monitoring data: what to collect, what should trigger an alert (avoiding an alert storm and pager fatigue), and how to follow the resources to find the root causes of problems.
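As a toy illustration of one alerting idea above, avoiding alert storms by firing only on a sustained breach rather than a single spike, here is a minimal sketch (the metric, threshold, and window size are invented for illustration):

```python
from collections import deque

def sustained_breach(samples, threshold, window):
    """Return True only if `window` consecutive samples all exceed
    `threshold`. Requiring a sustained breach instead of reacting to
    every spike is one simple way to reduce pager fatigue."""
    recent = deque(maxlen=window)
    for value in samples:
        recent.append(value)
        if len(recent) == window and all(v > threshold for v in recent):
            return True
    return False

# Made-up p99 latency samples in milliseconds: one isolated spike,
# then three consecutive bad readings that should trigger an alert.
latency_ms = [120, 90, 510, 130, 520, 530, 540]
print(sustained_breach(latency_ms, threshold=500, window=3))  # True
```

The same shape generalizes to resource metrics and event counts; only the collection source and thresholds change.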
The focus of this session is not tool-specific, so attendees will leave with strategies and frameworks they can implement in their environments today, regardless of the platforms and tools they use.