This talk will explain the steps needed to make a connection from Java to MySQL work and highlight potential issues you might encounter. It will cover all components, installation, and configuration.
"I migrated from a proprietary database software to PostgreSQL. I am curious to know whether I can get the same features I used to have in the proprietary database software."
The market coined the terms "enterprise grade" and "enterprise ready" to differentiate products and service offerings for licensed database software. For example, there may be a standard edition or an entry-level package that delivers the core functionality and basic features. Likewise, there may be an enterprise version, a more advanced package that goes beyond the essentials to include features and tools indispensable for running critical solutions in production. With such a differentiation found in commercial software, we may wonder whether a solution built on top of an open source database like PostgreSQL can satisfy all the enterprise requirements.
So, in this talk, we shall discuss how you can build an Enterprise Grade PostgreSQL using open source solutions.
We'll discuss a list of Enterprise-grade features, including:
1. Securing your PostgreSQL database cluster
2. High Availability for your PostgreSQL setup
3. Preparing a Backup strategy and the tools available to achieve it
4. Scaling PostgreSQL using connection poolers and load balancers
5. Tools/extensions available for your daily DBA life and detailed logging in PostgreSQL.
6. Monitoring your PostgreSQL and real-time analysis.
Database management systems (DBMSs) are notoriously difficult to deploy and administer because of their long list of functionalities. If a system could optimize itself automatically, then it would remove many of the complications and costs involved with its deployment. Most of the advisory tools built by researchers and vendors are incomplete because they require humans to make the final decisions about any database change and only fix problems after they occur. Recent work has proposed "self-driving" DBMSs that optimize the system for both the application's current workload, as well as the expected workload in the future. These systems will support existing tuning techniques and capacity planning without requiring a human to determine the right way and proper time to deploy them.
The first step towards such an autonomous DBMS is the ability to model and predict the target application's workload. In this talk, I present a robust forecasting framework called "QueryBot 5000" that we designed for self-driving operations. The framework integrates with any DBMS to predict the expected arrival rate of queries in the future based on historical data. It then provides multiple prediction horizons (short- vs. long-term) with varying aggregation intervals. I also discuss our vision and progress on how a self-driving DBMS uses these forecast models to optimize its performance.
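The flavor of such forecasting can be conveyed with a toy sketch (Python; the fixed-interval bucketing and windowed-mean model are illustrative simplifications, not the actual QueryBot 5000 models):

```python
from collections import Counter

def aggregate_arrivals(timestamps, interval):
    """Bucket query arrival timestamps into fixed-width intervals and count them."""
    buckets = Counter(int(ts // interval) for ts in timestamps)
    lo, hi = min(buckets), max(buckets)
    return [buckets.get(b, 0) for b in range(lo, hi + 1)]

def forecast(series, horizon, window=3):
    """Predict the next `horizon` interval counts with a sliding-mean model."""
    history = list(series)
    out = []
    for _ in range(horizon):
        prediction = sum(history[-window:]) / window
        out.append(prediction)
        history.append(prediction)  # feed predictions back in for longer horizons
    return out
```

Shorter intervals give fine-grained short-term predictions; coarser intervals smooth the series for long-term capacity planning.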
As a company that provides financial services, Square deals with sensitive data on a daily basis, and strong database access control is a core requirement. The task of managing database credentials for 1500+ users across 2000+ clusters manually is extremely tedious and error-prone. Thus, Square developed Lionheart as a microservice to automate much of this work, removing the need for DBAs to manually grant database access to users. Lionheart is responsible for creating and auditing user access. It automatically rotates users, certificates, and grants for both applications and developers every several days. In this talk, we will discuss how to keep your MySQL databases secure, with a discussion on the importance of using TLS encryption, as well as how we leveraged several other open-source tools to make this management easier. We'll discuss the gotchas we ran into, as well as some tips to help you manage your MySQL user access.
PostgreSQL provides a way to communicate with external data sources. This could be another PostgreSQL instance or any other database. The other database might be a relational database, such as MySQL or Oracle, or a NoSQL database such as MongoDB or Hadoop. To achieve this, PostgreSQL implements the ISO standard called SQL/MED in the form of Foreign Data Wrappers (FDWs). This presentation will explain in detail how PostgreSQL FDWs work. It will include a detailed explanation of simple features and will introduce more advanced features that were added in recent versions of PostgreSQL, such as aggregate pushdown and join pushdown.
The talk will include working examples of these advanced features and demonstrate their use with different databases. These examples show how data from different database flavors, including heterogeneous relational databases, can be used by PostgreSQL, and demonstrate NoSQL column store joins.
Planning to run MySQL, and want to safeguard data via replication? There are two main choices - traditional MySQL replication or Galera clusters. Which should you choose? We can help.
- Come see recorded demos of breaking replication. We'll demonstrate and compare the different ways MySQL's replication technologies can be thwarted, threatened and thrashed. A taste of what you'll see: A single writer stopping all transactions on a Galera cluster. Rendering a slave useless by neglecting primary keys or abusing metadata locks. Creating inconsistent schemas on a Galera cluster, and more!
- Attend this presentation if you need to understand the pitfalls of each replication strategy, and want to best match MySQL's features to your Application developers' needs.
Our demos use PXC-release, a Cloud Foundry project. CF is an OSS Platform as a Service. Our project gives Operators a reliable, automated way to spin up single node, master-slave, and Galera deployments of Percona Server. We'll share what we've learned: what's working, and what we'd do differently next time.
Since we have extra characters, we're also including our working outline for your consideration:
- Exploring the challenges of GTID
- Temporary tables and GTID (fixed by Percona!)
+ This affects some popular ORM libraries like Hibernate which use temporary tables extensively
- Dealing with errant transactions
+ FLUSH commands can generate GTIDs on a read-only slave
+ ... "How can I have different users on master and slave?"
- Ways to end up with a useless slave
+ Missing primary keys!
+ Single-threaded replication too slow / Multi-threaded Slaves (MTS)
+ Long-running SELECT can block replication (metadata locks)
- Rattlesnakes: Out of sync Slaves
+ binlog_format = STATEMENT
+ Missing users on slave ("stored code")
+ Non-transactional table
+ Memory tables
+ InnoDB vs. Binlog consistency
- How crash safe is replication?
+ "Impossible position" problem
+ how to use relay-log-recovery
- Disk space on master
+ Binary logs generating faster than they are purged
Things that affect both:
- Rope swing: Locking up replication by writing extremely large transactions
- Bottomless pit: Causing a Slave to fall infinitely behind: the Primary Key problem
- Examining multitenancy
+ Online DDL: Galera vs MySQL replication
- Quicksand: Locking up a Galera cluster with a long-running DDL
- Scorpions: Dangers of Rolling Schema Upgrade (RSU)
+ RSU = making a change outside of replication
+ Related to Master/Slave: making DDL changes on slave+switchover to avoid master downtime
- Fire: Inconsistent schemas on Galera
- Crocodiles: Galera deadlocks
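To illustrate the errant-transaction items in the outline above: an errant transaction is one the slave has executed that the master never did. The GTID-set comparison behind that check can be sketched in Python (a toy parser that handles only simple `uuid:n-m` sets, not the full GTID syntax):

```python
def parse_gtid_set(gtid_set):
    """Expand a GTID set like 'uuid:1-3:5,uuid2:1' into {(uuid, seq), ...}."""
    txns = set()
    for member in gtid_set.split(","):
        member = member.strip()
        if not member:
            continue
        uuid, *ranges = member.split(":")
        for r in ranges:
            lo, _, hi = r.partition("-")
            for seq in range(int(lo), int(hi or lo) + 1):
                txns.add((uuid, seq))
    return txns

def errant_transactions(slave_executed, master_executed):
    """Transactions the slave executed that the master never did."""
    return parse_gtid_set(slave_executed) - parse_gtid_set(master_executed)
```

A stray FLUSH on a read-only slave shows up here as a GTID sourced from the slave's own server UUID, which is exactly what makes later failovers painful.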
In a sharded MongoDB cluster, scale and data distribution are defined by your shard keys. Even when choosing the correct shard key, ongoing maintenance and review can still be required to maintain optimal performance.
This presentation will review shard key selection and how the distribution of chunks can create scenarios where you may need to manually move, split, or merge chunks in your sharded cluster. Scenarios requiring these actions can exist with both optimal and sub-optimal shard keys. Example use cases will provide tips on selection of shard key, detecting an issue, reasons why you may encounter these scenarios, and specific steps you can take to rectify the issue.
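As a toy illustration of a manual chunk split (not MongoDB's implementation; a numeric shard key and a median split point are assumptions for clarity):

```python
def split_chunk(chunk, shard_key_values):
    """Split a chunk covering [lo, hi) at the median shard-key value it holds."""
    lo, hi = chunk
    keys = sorted(k for k in shard_key_values if lo <= k < hi)
    split_point = keys[len(keys) // 2]
    return (lo, split_point), (split_point, hi)
```

A jumbo chunk built on a heavily skewed key may offer no usable split point at all, which is exactly the scenario that calls for reviewing the shard key itself.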
Ryan will demonstrate how FoundationDB can be applied to solve real business problems today and how to map common infrastructure components like logs, tables, and indexes into a cohesive system within FoundationDB. His example applies these techniques to a problem ClickFunnels was facing in mid-2018, which required scanning millions of end-user data points for each of their tens of thousands of customers multiple times per hour. Through custom bitmap indexes built on top of FoundationDB, queries which simply wouldn't finish now take milliseconds, which enables new use cases never thought possible.
We will discuss the state of Percona Server for MySQL 8.0, now GA, and current developments around it.
This 25-minute talk aims to introduce basic concepts of profiling Java code via the lightweight-java-profiler. We will see how to collect stack traces, and will then use Flame Graphs to get a dynamic visual display of them. This will allow us to see if our code has room for improvement (and where) in an easy and scalable way. It doesn't matter if you are coding a one-hundred-line or a one-hundred-thousand-line application: the Flame Graph visualizer will make it easy to spot CPU hogs!
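The "folded" stack format that Flame Graph tooling consumes is simple enough to sketch in a few lines (Python here for brevity; the frame names are made up):

```python
from collections import Counter

def fold_stacks(samples):
    """Collapse raw stack samples into flame-graph 'folded' lines.

    Each sample is a list of frames, outermost first; identical stacks are
    merged and counted, which is the input format flamegraph.pl expects.
    """
    counts = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {n}" for stack, n in sorted(counts.items())]
```

The profiler's job is just to take these samples many times per second; the visualizer turns the merged counts into the familiar flame shape.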
Indexes are a basic feature of relational databases, and PostgreSQL offers a rich collection of options to developers and designers. To take full advantage of these, users need to understand the basic concepts of indexes and be able to compare the different index types and how they apply to different application scenarios. Only then can you make an informed decision about your database index strategy and design.
One thing is for sure: not all indexes are appropriate for all circumstances, and using a 'wrong' index can have the opposite effect to the one you intend, and problems might only surface once in production. Armed with more advanced knowledge, you can avoid this worst-case scenario!
We'll take a look at how to use pg_stat_statements to find opportunities for adding indexes to your database. We'll look at when to add an index, and when adding an index is unlikely to result in a good solution.
So should you add an index to every column? Come and discover why this strategy is rarely recommended as we take a deep dive into PostgreSQL indexing.
Infrastructure automation is not easy, especially for stateful services like MySQL (or any other database for that matter). It goes way beyond the capabilities of Ansible, Chef, SaltStack or other similar tools. In this session I'm going to show you how we went from fully manual operations to a self-healing system in less than a year at Salesforce. Having done this at several companies already, I've seen the common mistakes that can break your system and make your well intended scheduler/scripts/orchestrator a ticking bomb. I will share how to avoid these problems and build a robust and scalable automation framework that's been battle tested at companies such as Booking.com and Dropbox.
We will cover:
* Tool comparison
* Centralised vs decentralised system
* Concurrency handling
* Best practices and anti-patterns
In this day and age, maintaining privacy throughout our electronic communications is absolutely necessary. Creating user accounts, and not exposing your MongoDB environment to the wider internet, are basic concepts that have been missed in the past. Once that has been addressed, individuals and organizations interested in becoming PCI compliant must turn to securing their data through encryption. With MongoDB, we have two options for encryption: at rest (only available as an enterprise feature with MongoDB) and transport encryption.
In this session we will review
- MongoDB default security
- Additional layers of security
- Audit and Log reduction
- Encryption and why it's important
- Step by step for encryption at rest and in transit
- Percona for MongoDB security features
We all use and love relational databases... until we use them for purposes for which they are not a good fit: queues, caches, catalogs, unstructured data, counters, and many other use cases could be solved with relational databases, but are better solved with other alternatives.
In this talk, we'll review the goals, pros and cons, and good and bad use cases of these alternative paradigms by looking at some modern open source implementations.
By the end of this talk, the audience will have learned the basics of three database paradigms (document, key-value, and columnar store) and will know when it's appropriate to opt for one of these or when to favor relational databases and avoid falling into buzzword temptations.
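The three paradigms can be caricatured in a few lines of Python (toy in-memory structures for illustration, not real engines):

```python
# Row store: each record kept together. Good for point lookups and updates.
rows = [{"id": 1, "price": 10.0}, {"id": 2, "price": 20.0}, {"id": 3, "price": 30.0}]

# Column store: each attribute kept together. Good for scans and aggregates.
columns = {"id": [1, 2, 3], "price": [10.0, 20.0, 30.0]}

# Key-value store: opaque values behind a key. Good for caches and sessions.
kv = {f"order:{r['id']}": r for r in rows}

row_total = sum(r["price"] for r in rows)  # touches every field of every row
col_total = sum(columns["price"])          # touches only the one column needed
```

A document store generalizes the key-value case by making the value a queryable, nested structure; which layout wins depends entirely on the access pattern.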
A discussion of different types of encryption as it relates to MySQL and the community, followed by a deep dive into key management with Hashicorp's Vault software and MySQL.
Real world examples, problems, and "what worked" for Empowered Benefits as they embarked on their journey to implement encryption at rest in their health care centric IT environment.
I will present a subset of the most notable ClickHouse features from the last half-year:
- data skipping indices, including full text indices (with performance evaluation and insights on implementation);
- custom compression codecs for time series data;
- HDFS and Parquet integration;
- fuzzy string search (it is a really fast fuzzy string search); multiple substring matching;
- sampling profiler on the query level;
- z-curve indexing;
- table and columns TTL;
In this presentation, we'll cover the following areas:
- MySQL architecture and application ecosystem at Venmo
- Scalability challenges of MySQL for Venmo applications for Super Bowl peak traffic
- Short term scalability improvements for peak traffic, including horizontal and vertical scalability approaches.
- Long term directions to scale MySQL databases, including domain isolation, data sharding, and adapting MySQL database to support micro service applications.
- Case studies of MySQL performance tuning. Examples would include modifying application logic to eliminate database queries and working around optimizer bugs to handle multiple-table joins with order by limit clauses.
This presentation will go through the simple process of accessing data from a Java application. What actually happens when we use a simple direct connection, and what happens instead when using an ORM/persistence layer like Hibernate? How this apparently makes programmers' lives easier... and the DBA's days more difficult.
PostgreSQL is an advanced open source database that is completely community driven. Continuous development and performance improvements, while maintaining a secondary focus on Oracle compatibility, gave PostgreSQL a great market penetration. When a database server is deployed in production, we often wish to achieve several 9's of availability. Is that even possible with PostgreSQL? What is the combination of tools that you could combine and implement to achieve High Availability (HA) and automatic failover in PostgreSQL? How can we avoid data loss during such failovers? We'll address these questions and then some more in this talk.
We are going to discuss:
1. How the implementation of HA differs for each type of replication available in PostgreSQL.
2. How to combine HAproxy with etcd plus a detailed explanation of the RAFT Algorithm and HA using Patroni.
3. How to combine repmgr with keepalived to achieve HA.
4. HA solution built for PostgreSQL on kubernetes.
5. What are the tools and solutions that help you achieve automatic failover in AWS and other cloud-based environments.
6. How to avoid huge data loss during failover.
At Square, we operate thousands of database instances to power a financial network, from payments to payroll. In a word: money. "Mission-critical" isn't critical enough. Come learn how we operate MySQL and Redis with billions of dollars at stake. We'll look at everything: configuration, management, monitoring, tooling, security, high-availability, replication, etc.
Data security plays a critical role in PayPal's database infrastructures. In this presentation, we will discuss how PayPal enforces data security. The following areas will be covered:
- SSL encrypted connections between applications and database instances, as well as database to database instances
- Integration of database login with LDAP for user authentication and authorization
- Enterprise auditing for database access and metadata/object modifications
- Securing application login with custom SSL key and password management, password rotations
- Methods to avoid password exposure, such as by using MySQL connection strings
- Challenges of standardization of MySQL to Percona XtraDB in PayPal. How we handled
-- different versions of MySQL on different operating systems
-- application users with super user privileges
-- incompatibilities between MySQL commercial and Percona XtraDB Cluster
Amazon Redshift has been providing scalable, quick-to-access analytics platforms for many years, but the question remains: how do you get the data from your existing datastore into Redshift for processing?
Traditional ETL methods can't keep up with large volumes of data, and can require manual reprocessing when an error occurs. Running queries by record change date puts a load on your MySQL server and pollutes your cache.
Wouldn't it be great if you could replicate your data in real time, filter on the tables and schemas you need, all without putting any extra load on your MySQL server? Wouldn't it also be great if schema changes just flowed through from MySQL to RedShift, without intervention on your part?
Join us as we explain how you can have it all: real-time, secure replication from MySQL/MariaDB/RDS MySQL/Aurora to RedShift, with schema changes replicated and no replaying of jobs needed when errors occur.
We will cover the upgrade from MySQL 5.7 to MySQL 8.0 (8.0.15), going from legacy metadata storage to the transactional data dictionary. We will cover the new possibilities for automation of upgrades, the major advances in upgrade speed and reliability, and the new consistency checks in the MySQL upgrade checker.
MySQL 8.0 has a pluggable error log. We will talk about the traditional error logger and the JSON error logger, which empowers users with advanced filtering.
Learn how to monitor MongoDB using Percona Monitoring and Management (PMM) so that you can:
* Gain greater visibility of performance and bottlenecks in MongoDB
* Consolidate your MongoDB servers into the same monitoring platform you already use for MySQL and PostgreSQL
* Respond more quickly and efficiently to Severity 1 issues
We'll show how, using PMM's native support for MongoDB, you can have MongoDB integrated in only minutes!
Embedded databases, tightly integrated with application software, are a great alternative to standalone database systems for small applications. This talk will cover:
- Comparison of popular embedded database engines (Berkeley DB, SQLite, Firebird Embedded, the deprecated embedded libmysqld).
- How to design an application that uses an embedded database, and when not to use one.
- The advantages and limitations of embedded database engines.
By the end of the session, attendees will know the advantages of embedded databases, and when and how to use one compared to an external database.
We store data with the intention to use it: search, retrieve, group, sort... To do this effectively, the MySQL Optimizer uses index statistics when it compiles the query execution plan. This approach works excellently unless your data distribution is uneven.
Last year I worked on several support tickets where the data followed the same pattern: millions of popular products fit into a couple of categories, and the remaining categories shared the rest. We had a hard time finding a solution for retrieving goods fast. We offered workarounds for version 5.7. However, a new MariaDB and MySQL 8.0 feature, histograms, would work better, cleaner, and faster. The idea of this talk was born.
Of course, histograms are not a panacea and do not help in all situations.
I will discuss
- how index statistics are physically stored by the storage engine
- which data is exchanged with the Optimizer
- why this is not enough to make the correct index choice
- when histograms can help and when they cannot
- differences between MySQL and MariaDB histograms
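The intuition behind equi-height histograms can be sketched in Python (a deliberate simplification; the real MySQL 8.0 and MariaDB histograms store more than bucket upper bounds):

```python
def equi_height_histogram(values, buckets=4):
    """Split sorted values into buckets holding (roughly) equal row counts,
    keeping each bucket's upper bound, similar in spirit to the equi-height
    histograms in MySQL 8.0 and MariaDB."""
    data = sorted(values)
    size = len(data) / buckets
    return [data[min(int(size * (i + 1)) - 1, len(data) - 1)] for i in range(buckets)]

def estimate_selectivity(hist, value):
    """Estimated fraction of rows <= value, derived from bucket boundaries."""
    covered = sum(1 for bound in hist if bound <= value)
    return covered / len(hist)

skewed = [1] * 50 + list(range(2, 52))  # one value dominates, the rest are rare
hist = equi_height_histogram(skewed, buckets=4)
```

Because buckets hold equal row counts rather than equal value ranges, a dominant value occupies several buckets, so the optimizer "sees" the skew that plain index statistics average away.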
MongoDB is one of the most popular NoSQL databases: how can we get started?
This talk will explore Document Oriented Modeling, Spring Data MongoDB framework features and lessons learned using this stack in production.
Considering a move to PostgreSQL? Here's a chance to learn from all the fun adventures we had moving one of our services from RethinkDB to PostgreSQL.
From materialized view refreshes, to system view queries, to torn page analysis, there's never a dull moment during that first migration!
In this session we'll discuss how we use Ansible to manage the internal MySQL services at DigitalOcean, where this has worked well for us, as well as some of the issues we've experienced along the way.
We'll dive into how we use Ansible to manage MySQL, ProxySQL & Orchestrator and other related technologies in our environment, and will discuss topics such as static vs dynamic config management, Ansible performance tuning & anti-patterns, as well as testing strategies.
At the end of this session, you should have a good understanding of how Ansible could help in your environment, as well as the potential limitations you may need to think about.
Security is always a challenge when it comes to data, but regulations like GDPR bring a new layer on top. With rules come more and more restrictions on accessing and manipulating data. Join us in this presentation to review security best practices, and traditional and new features available for MySQL, including features coming with the new MySQL 8.
In this talk, DBAs and sysadmins will walk through the security features available in the OS and MySQL:
- OS security
- Audit Plugin
- MySQL 8 features
- New caching_sha2_password
- Password Management
- FIPS mode
We will share our experience of working with thousands of support customers, help the audience to become familiar with all the security concepts and methods, and give you the necessary knowledge to apply to your environment.
How fast can you failover your databases? Do you trust it? Do you trust the process enough to let [almost] anyone do it, at any time? We do!
At Square, we manage thousands of MySQL and Redis database clusters. We recently rewrote all of our automation which fails over MySQL databases - making it even faster and more reliable. We brought the time from the user requesting the action, to database writes going to the new target - to generally under 2 seconds, with no real downtime or risk. This rewrite went so well for MySQL, that we decided to further abstract the process and apply the exact same set of tools to our Redis.
This talk describes the prerequisites, process, tooling, and lessons learned in safely cutting over database traffic and abstracting the process to apply to both MySQL and Redis.
During this talk, I will explain how Group Replication works.
This is a theoretical talk in which I explain in detail what this replication is and how it works.
I discuss what certification is and how it's done. What is XCOM and GCS?
Is Group Replication synchronous? What are the four new consistency guarantees and how do they work?
What is the benefit of Single Primary Mode?
What are the caveats of such replication?
Why is Paxos Mencius more efficient than Totem? Is it always?
What about IPv4, and what about IPv6?
After this presentation, the audience should be comfortable with the technical terms and understand how it works.
Don't miss this talk, the magician will reveal his tricks!
In MongoDB 4.0 the MongoDB community got its first chance to use transactions. Well, except for the short-lived TokuMX clone. In this talk I will discuss the capabilities and limitations of the new feature, including:
- Potential impact on concurrency
This is a feature I have long waited for and I can only view as a good thing, extending the uses of MongoDB into realms previously not possible. At the same time, as with any database transactional feature, one has to be careful not to allow the feature to seriously impact concurrency.
Columnar stores like ClickHouse enable users to pull insights from big data in seconds, but only if you set things up correctly. This talk will walk through how to implement a data warehouse that contains 1.3 billion rows using the famous NY Yellow Cab ride data. We'll start with basic data implementation including clustering and table definitions, then show how to load efficiently. Next we'll discuss important features like dictionaries and materialized views, and how they improve query efficiency. We'll end by demonstrating typical queries to illustrate the kind of inferences you can draw rapidly from a well-designed data warehouse. It should be enough to get you started--the next billion rows is up to you!
1) General concepts:
- what happens when keyring plugin is loaded
- where keys are stored
- keyring initialization failures
- taking care of core dumps
- how to setup keyring_vault (separation of servers on Vault server)
- list of keys on a server (base64 encoded)
2) How InnoDB Master Key encryption works internally:
- where encryption key is stored
- relation between Master Key and tablespace's encryption key
- keyring cooperation
- keyring uninstallation
3) how key rotation works
4) can table be re-encrypted?
5) Encryption threads
- what are encryption threads
- key rotation
6) binlog encryption:
- communication between master and slave
- key rotation
- MySQL / PS encryption (8.0.14)
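The Master Key / tablespace-key relationship in the outline, and why Master Key rotation is cheap, can be sketched with a toy envelope-encryption example (Python; the XOR "cipher" is purely illustrative, not real cryptography and not MySQL's actual scheme):

```python
import os, hashlib

def keystream(key, n):
    """Toy keystream derived from SHA-256. Illustration only, NOT real crypto."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def wrap(master_key, data_key):
    return bytes(a ^ b for a, b in zip(data_key, keystream(master_key, len(data_key))))

unwrap = wrap  # XOR is its own inverse

# Each tablespace gets its own data key; only its wrapped form is stored on disk.
master_key = os.urandom(32)
tablespace_key = os.urandom(32)
stored = wrap(master_key, tablespace_key)

# Master Key rotation only re-wraps the data key; table data is NOT re-encrypted.
new_master = os.urandom(32)
stored = wrap(new_master, unwrap(master_key, stored))
```

Rotation touches a few bytes of key material per tablespace rather than the tablespace contents, which is why it is fast even on large instances.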
It's easy for Java developers (and users of other OO languages) to mix object-oriented and imperative thinking. But when it comes to writing SQL, the nightmare begins! First, SQL is a declarative language that has nothing to do with either OO or imperative thinking: while it is relatively easy to express a condition in SQL, it is not so easy to express it optimally, and even harder to translate it to the OO paradigm. Second, developers need to think in terms of sets and relational algebra, even if unconsciously!
In this talk, we'll see the most common mistakes that OO developers, especially Java developers, make when writing SQL code and how we can avoid them.
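One classic example of such a mistake is the N+1 query pattern, sketched here with SQLite (schema and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO books VALUES (1, 1, 'SQL 101'), (2, 1, 'Sets'), (3, 2, 'Joins');
""")

# OO habit: loop over parents, one extra query per child list (N+1 queries).
n_plus_1 = {name: [t for (t,) in conn.execute(
                "SELECT title FROM books WHERE author_id = ? ORDER BY id", (aid,))]
            for aid, name in conn.execute("SELECT id, name FROM authors ORDER BY id")}

# Set thinking: one join, one round trip, and the optimizer does the work.
joined = {}
for name, title in conn.execute(
        "SELECT a.name, b.title FROM authors a "
        "JOIN books b ON b.author_id = a.id ORDER BY a.id, b.id"):
    joined.setdefault(name, []).append(title)
```

Both produce the same result, but the first issues one query per author, which is exactly the pattern that hides inside innocent-looking ORM loops.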
Ken will share insights on the Future of Postgres - what's new, what's next, and what we are aspiring toward. He will cover:
* Roadmap for PostgreSQL 12 and beyond
* Community updates
* New ways Postgres is being used in the market
Our world is moving fast, and data is piling up. There was a time when a DBA managed a few machines, then a few hundred; now each DBA needs to handle thousands. To enable this, we are going to talk about how we used to monitor, and then how SRE has changed the rules. We will also cover why your DBA is still needed even with SRE, though the function is different. Finally, we will dig into migrating to and enabling SRE functions with Machine Learning, or AIOps. This will include a discussion of what AIOps for databases is not, and how you can get there.
* Classic Database Operations and Deployment
* SRE Concepts for the DBA
* Automation DBA tasks and investigations
* Monitoring of Yesterday
* Automating Alert Handling
* Machine Learning and how it helps SRE
* What not to expect from ML and AIOps
* Talk about Monasca as a good architecture template
As we move into the future, data becomes worth stealing, and it is stolen often. Backups become critical not only to be safe in case of an accident, but to be safe from people and computers who should not have your data. Security compliance is here (and there will be more in the future), and both PCI and GDPR have requirements for backups. Our data may be exactly what thieves want, so encryption, storage, and retention policies must be reviewed and applied.
Come share your experience and learn from others on how to make databases secure and compliant.
Multi-source replication is an awesome way to aggregate data for BI/analytics.
But how do you get all those 42 database instances (8 TB in total) into one instance when keeping weeks of binlogs is not an option?
Comparing performance, ease of use, and load on source system for mysqldump and Xtrabackup/TT.
Special cases :
* 5.6 and Partitioned tables.
* Tables in mixed formats (Partitions created over time)
Being hosted in the cloud is not just about having someone to take care of your database servers for you. It's also about how to improve, contribute, and implement best practices of MySQL to better serve customers. Did you know that we changed how disk block sizes work to improve InnoDB general performance? How about how enforcing GTID yields a significantly more resilient replication? Come join me on a journey about how we thoughtfully orchestrated our hardware, Container-Optimized OS (the Kubernetes Operating System), and MySQL itself to deliver world class performance and reliability in our product.
Doug will discuss the basic knowledge needed to understand the complications of running MongoDB inside a containerized environment, and then go over the specifics of how Percona solved these challenges in the PSMDB Operator. The talk will also provide an overview of PSMDB Operator features, and a sneak peek at future plans.
At ITSumma, we provide 24/7 site reliability engineering for more than 300 clients with 10,000+ servers in total, collecting over 200 thousand metrics per second.
In 2010, we realized that existing monitoring systems could not handle our requirements. What we needed was the capability to instantly process and display analytics, store a minimum of 1 year's worth of data in 15-second, (better yet 1-second) intervals, and make quick-fire (as little as 200-millisecond) queries to retrieve high-resolution data snapshots.
That's why we developed our own monitoring system, and it worked well with the infrastructure of that time. In 2018, our system could no longer meet the requirements of new infrastructures, and had outlived its usefulness in some ways.
Since late 2018, we have been developing a new monitoring system.
To assist us with this project, we compared several major solutions for storing time-series data, including Prometheus storage, InfluxDB, Cassandra, ClickHouse, and others.
We investigated their capabilities with our production data in terms of performance, stability, scalability, and storage usage.
At Percona Live I would like to present our findings and show the results of our production and performance tests which we consider useful for anyone interested in storing massive amounts of time series data.
Are you struggling with too much data? Are your MySQL databases able to ingest the influx of new data? Are you considering an alternate database, but worried that modifying your application would be challenging? Are you able to query your data efficiently once it is ingested?
All these questions are raised more often as the trend of connected devices fully deploys. This talk focuses on large metrics-oriented datasets, but the ideas presented apply to a wider audience. From experience gathered as a principal architect at Percona helping customers, I'll cover all the phases of the data life cycle: initial ingestion, staging, compression, analysis, aggregation, and final archival.
Real-world examples and solutions will be given to quickly improve performance and lower costs.
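One typical staging step in that life cycle, downsampling raw points into windowed aggregates before archival, can be sketched as (Python, illustrative only):

```python
def downsample(points, width):
    """Aggregate (timestamp, value) points into fixed windows of `width`
    seconds, keeping min/max/avg/count per window."""
    windows = {}
    for ts, value in points:
        windows.setdefault(int(ts // width) * width, []).append(value)
    return {start: {"min": min(vs), "max": max(vs),
                    "avg": sum(vs) / len(vs), "count": len(vs)}
            for start, vs in sorted(windows.items())}
```

Keeping min/max alongside the average preserves the spikes that averaging alone would erase, at a fraction of the raw storage cost.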
Usually, Java applications use databases as storage, and an already working app is not necessarily easy to support and maintain. We'll review some cases where you have a MySQL-powered Java application and how you can easily set up monitoring for it using PMM. You'll see information about Java, the system, and the database in one place, and will be able to analyze it.
We'll see the latest version of Percona Monitoring and Management system (PMM) and combine it with JMX exporter to get data from Java metrics. All of these will be collected in Prometheus. We'll review dashboards and see how to make your own dashboards.
Learn how to monitor PostgreSQL using Percona Monitoring and Management (PMM) so that you can:
* Gain greater visibility into PostgreSQL performance and bottlenecks
* Consolidate your PostgreSQL servers into the same monitoring platform you already use for MySQL and MongoDB
* Respond more quickly and efficiently to Severity 1 issues
We'll show how, using PMM's native support for PostgreSQL, you can have PostgreSQL integrated in only minutes!
Based on our experience deploying PMM in hundreds of different environments, in this session I'm going to show how to manage PMM in production using Ansible for automation.
Some of the automation tasks include:
- PMM Server deployment
- PMM Client for multiple server types (MySQL, ProxySQL, ...)
- PMM managed for RDS
- Exporter monitoring
Deploying SSL/TLS with MySQL at Booking.com on thousands of servers is not without issues.
In this session I'll tell you what steps we took, what problems we hit, and how we improved various parts of the MySQL ecosystem while doing so.
To start, we go over the basics: which TLS settings are there in MySQL and MariaDB, and how does this differ from HTTPS as used in browsers? Why do we want TLS in the first place? Are TLS and SSL the same thing?
The first set of problems is inside MySQL: YaSSL vs. OpenSSL, verification issues and reloading of certificates.
The second set of problems is inside connectors: I'll touch on DBD::mysql (Perl), Go-MySQL-Driver, and libmysqlclient (C).
Not all connectors have the same options and defaults. I'll go into TLSv1.2 support.
The third set of problems is tools: Using the require_secure_transport option caused issues with Percona Toolkit and Orchestrator.
I'll also cover: RSA vs. EC, security issues I found, and how I wrote a proxy for MySQL.
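The point that connectors differ in their options and defaults can be illustrated generically. This is a hedged Python sketch using the standard `ssl` module (not one of the connectors covered in the talk) of the kind of client-side policy involved: require certificate verification and refuse anything older than TLSv1.2:

```python
import ssl

# Generic client-side TLS policy, illustrative only: the MySQL
# connectors discussed in the talk each expose their own option
# names for the equivalent settings.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLSv1.0/1.1

print(ctx.minimum_version.name)              # TLSv1_2
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server cert is verified
```

Whether a given connector verifies the server certificate by default, and which protocol versions it will negotiate, is exactly where the surprises described above tend to hide.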
A standard pattern in scaling relational data access is to use replicas for reads, but this compromises read-after-write consistency. Unfortunately, since many use-cases require stronger consistency guarantees, the pattern is of limited utility. At Box much of our relational data access requires strong consistency for the purposes of enforcing enterprise permissions, but we need to scale too! To address this challenge, we have introduced a novel design pattern and a framework implementing it. This framework, currently used at scale to serve production traffic at Box, utilizes real-time replication position analysis to safely direct traffic that requires strong read-after-write consistency to read-only replicas.
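The core routing decision in such a framework can be sketched in a few lines of Python. All names here are illustrative assumptions, not Box's actual API: a read that must observe the session's last write goes to a replica only if that replica's applied replication position (e.g. a GTID sequence number) has caught up, and otherwise falls back to the primary:

```python
def route_read(session_last_write, replicas, primary="primary"):
    """Pick a host for a read requiring read-after-write consistency.

    session_last_write: replication position of the caller's most
    recent write. replicas: dict mapping host name -> highest
    position that host has applied. Falls back to the primary when
    no replica has caught up. Illustrative sketch, not Box's API.
    """
    for host, applied in replicas.items():
        if applied >= session_last_write:
            return host  # this replica is guaranteed to see the write
    return primary       # no replica has caught up yet

replicas = {"replica-1": 95, "replica-2": 100}
print(route_read(98, replicas))   # replica-2
print(route_read(101, replicas))  # primary
```

The hard production problems are in keeping the `replicas` position map fresh in real time and in tracking each session's last-write position, which is what the framework described above handles.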
Slow queries are the harsh reality of any database. Irrespective of how you build the system, design the data, educate developers, and control access, it's very hard to prevent slow queries. They negatively affect the performance of the database and thus of any application using it. It's crucial to monitor them; otherwise, you will end up discovering them while troubleshooting a production database performance issue.
At ThousandEyes, we have taken a proactive, automated approach to monitoring slow queries in our MySQL fleet. We have built a completely automated pipeline using open source technologies - Percona Toolkit and Anemometer - plus an in-house tool, Slow Query Notifier, to catch these slow queries and notify us. Slow Query Notifier identifies top offenders and poorly designed queries, focusing on the most impactful culprits. It notifies the author of the respective query via our JIRA ticketing system, and is capable of managing a complex JIRA workflow - creating and reopening issues, and updating priorities.
In this presentation, we go over the importance of proactively monitoring slow queries and share our design and learnings. We also share our plans to open source Slow Query Notifier, integrate it with PMM Query Analytics, and add support for MongoDB slow queries.
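The "top offenders" idea described above - ranking query fingerprints by their total impact rather than by single-execution latency - can be sketched in Python. Field names and numbers below are illustrative assumptions, not ThousandEyes' actual schema; the input resembles what a digest tool such as pt-query-digest aggregates from a slow query log:

```python
def top_offenders(digest, n=3):
    """Rank query fingerprints by total time consumed.

    digest: list of dicts like {"fingerprint": ..., "count": ...,
    "avg_ms": ...}. Field names are illustrative. Returns the n
    fingerprints with the highest total cost (count * avg_ms),
    so a fast query run millions of times can outrank a slow one.
    """
    ranked = sorted(digest, key=lambda q: q["count"] * q["avg_ms"], reverse=True)
    return [q["fingerprint"] for q in ranked[:n]]

digest = [
    {"fingerprint": "SELECT * FROM orders WHERE ...", "count": 10, "avg_ms": 900},
    {"fingerprint": "SELECT id FROM users WHERE ...", "count": 5000, "avg_ms": 3},
    {"fingerprint": "UPDATE carts SET ...", "count": 200, "avg_ms": 40},
]
print(top_offenders(digest, n=2))
# ['SELECT id FROM users WHERE ...', 'SELECT * FROM orders WHERE ...']
```

Ranking by total cost is what makes the pipeline focus on the most impactful culprits: the cheap-but-frequent `users` query outranks the slow-but-rare `orders` scan.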
Introduction, installation, configuration, usage examples.
How to back up and restore to a local file system.
How to back up and restore to S3.
Kirill & Kostja will present an overview of Tarantool, an open source in-memory DBMS. They will explain why it is cool to have a DBMS in the same address space as your application server, why Tarantool is in fact single-threaded, and why the in-memory DBMS now features a new disk-oriented engine.
Finally, they'll explain why a NoSQL DBMS now supports SQL.
Immutable Database Infrastructure with Percona XtraDB Cluster
In this session, we will discuss how we built an immutable infrastructure for our databases.
Yahoo! JAPAN is the largest web portal in Japan, with about 90 million unique browser accesses per month.
We have focused on managing our many databases efficiently.
If we need to change something, we abandon the database server entirely and reinstall it from the OS up.
In other words, "Immutable Infrastructure" means "disposable".
This approach keeps database servers clean and fresh at all times.
In addition, the database never stops during this cycle.
Percona XtraDB Cluster provides functions that are key to achieving this mechanism.
The session will cover:
- What is "Immutable Infrastructure"?
- Architecture overview
- Why Percona XtraDB Cluster?