Detecting faults in a system is an age-old problem with a large body of literature and practice. Complete failures are usually not hard to detect, but a misbehaving component is often less straightforward. If the system's throughput dips, is it a fault in the system? Or is there decreased demand because another layer in the application stack is failing?
Amazon Web Services is a popular place to run MySQL. Come hear tips and tricks for running optimally in the cloud. Should you use RDS or EC2? How can you achieve reliable IO? What monitoring tools can you use?
In this keynote session, the speaker will introduce the world’s first Ruby on Rails benchmark to analyze performance and scalability of Rails implementations on MySQL and MySQL-compatible databases, like Clustrix. The benchmark simulates an auction site with social media extensions to demonstrate performance across a wide variety of application workloads. Attendees will learn how the choice and configuration of the database provides the key to breaking through the scalability ceiling with Rails implementations.
Managing global data sets can be tough, and making them unreasonably fast and utterly reliable is only part of the problem. Once you’ve chosen MySQL and NDB, what techniques can you use to create data systems that respond quickly to complex queries? This talk will focus on best practices for making your data smart as well as fast.
How do you find one user’s data when it is located in one of a hundred schemas? Build a global index table that identifies the home data center and schema for any user. Sharding data across many schemas is the lifeblood of highly scalable websites, and multiple data centers are the key to maintaining availability and performance around the globe. You can use Tungsten Replicator to combine key data from all of these schemas into a central table in real time. This talk will show examples of doing this in MySQL or MongoDB.
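A global index table of this kind can be sketched in a few lines. The table and column names below are illustrative assumptions, not taken from the talk; the point is simply that one small, centrally replicated table answers the "where does this user live?" question before any shard is queried.

```python
import sqlite3

# Minimal sketch of a global index table: it maps each user to the
# data center and shard schema that hold their data. Names here
# (user_location, datacenter, shard) are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_location (
        user_id    INTEGER PRIMARY KEY,
        datacenter TEXT NOT NULL,
        shard      TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO user_location VALUES (?, ?, ?)",
    [(101, "us-east", "schema_07"),
     (102, "eu-west", "schema_42")],
)

def locate_user(user_id):
    """Return (datacenter, shard) for a user, or None if unknown."""
    return conn.execute(
        "SELECT datacenter, shard FROM user_location WHERE user_id = ?",
        (user_id,),
    ).fetchone()

print(locate_user(102))  # -> ('eu-west', 'schema_42')
```

In production this table would be kept current by replication (the talk proposes Tungsten Replicator) rather than by direct inserts, but the lookup path is the same.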
At Clustrix, we're frequently asked by our customers to help them optimize their query workload for our database. To help provide the best customer support possible, we've built the Clustrix DevOps Control Center to profile live database traffic, as well as track how the workloads change historically.
This breakout session will take an in-depth look at the DevOps Control Center and how you can use it to gain deep insight into your application workload and quickly identify improvements to the most critical queries.
The current flash/solid-state market is crowded and sometimes confusing. This talk surveys the current landscape and the significant benefits of deploying flash memory instead of traditional hard disk drives (HDDs) in MySQL environments. It will focus on a few key areas, including why traditional HDD solutions fall short and why legacy systems retrofitted with SSDs still remain too costly to deploy at large scale for all but a select few workloads.
Replication is one of MySQL's most widely used features, and despite significant improvements over the years, it can still fail. When this happens, it can be due to any number of factors, either internal or external to your MySQL database. For those who do not have a thorough understanding of the MySQL binary log format or the slave's two-phase replication architecture, safely restoring replication while maintaining data integrity can be a daunting task.
Timeline isn’t just a bold new look for Facebook: it’s also the product of a remarkably ambitious engineering effort. While our earlier profile pages displayed a few days or weeks of activity, from the outset we knew that with Timeline we had to think in terms of years and even decades. At a high level, we needed to scan, aggregate, and rank posts, shares, photos, and check-ins to surface the most significant events over years of Facebook activity. The end result was a flexible yet straightforward solution built on MySQL.
The pundits said MySQL was on the ropes after the Oracle acquisition in 2009. Far from it! Thanks to a burst of innovation and three popular builds, MySQL is an even safer bet to build 24x7 transaction processing systems. But how do you handle requirements to spread data across regions or add real-time analytics? How do you scale up to Big Data on cloud platforms? At Continuent we believe the answer is to embed off-the-shelf MySQL in "data fabrics" consisting of modular data services tied together by advanced replication.
Galera is a fundamentally new replication technology that opens revolutionary possibilities for building application high-availability stacks. This presentation will show how Galera can be used in various HA use cases, such as synchronous master-slave or multi-master replication. It will also lay out deployment alternatives, from running a cluster on a high-speed LAN using UDP multicast to replicating over a WAN or in a cloud environment.
21st-century mythology is full of stories of simple technology ideas that, with free software and cheap connectivity, can be brought to life in a weekend with very little cash.
As an experiment, I set out to build a PHP and MySQL based social web application spending as little time and money as possible. I needed to include the types of features that make up the modern web: Facebook and Twitter integration, game-like mechanics, crowd-sourced content, and funny pictures of cats.
Huge, rapidly-growing MySQL architectures can be quite challenging to maintain. Sharding helps tremendously with horizontal scalability, but introduces a great deal of operational complexity. In order to stay sane while juggling hundreds of database servers and tens of billions of rows, sophisticated automation becomes a necessity.
To solve this problem at Tumblr, we created Jetpants, a multi-purpose toolchain for managing giant MySQL topologies. In this session, we will outline how we...
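One core piece of any such toolchain is a deterministic routing layer that maps a row's key to the shard holding it. The sketch below is illustrative only: the range boundaries and host names are invented, and it is not Jetpants code, but it shows the kind of mapping that must stay consistent while shards are split and rebalanced.

```python
# Hypothetical range-based shard map: each entry covers a span of user
# ids and names the database host that owns it. Boundaries and host
# names are made up for this example.
SHARD_RANGES = [
    # (min_id inclusive, max_id inclusive or None for open-ended, host)
    (1,          1_000_000, "db-shard-01"),
    (1_000_001,  2_000_000, "db-shard-02"),
    (2_000_001,  None,      "db-shard-03"),  # open-ended tail shard
]

def shard_for(user_id):
    """Map a user id to its owning shard via range lookup."""
    for lo, hi, host in SHARD_RANGES:
        if user_id >= lo and (hi is None or user_id <= hi):
            return host
    raise ValueError(f"no shard covers id {user_id}")
```

Splitting a shard then amounts to rewriting one entry of this map into two, which is exactly the kind of operation that benefits from automation once there are hundreds of entries.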
In two years the MariaDB project has had four major releases with a tonne of major features. Why should you care? This is not a talk about the community around MariaDB, but a feature-by-feature blowout of why you should consider this database, which is current with MySQL 5.5.
* How does MariaDB execute queries faster? By materializing non-correlated subqueries? Find out about the immense changes in the optimizer in an overview of the many improvements available in MariaDB 5.3 and 5.5.
MySQL runs faster, and flash media endurance increases, when MySQL's double-buffered writes are replaced with native atomic writes. Learn how MySQL is leveraging new flash storage API primitives for code simplification and latency reduction. The session will include a code sample overview and benchmark results, as well as a logical description of emerging flash storage API primitives.
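The contrast the talk draws can be sketched abstractly. The functions below are a toy model, not MySQL or any real storage API: the first models the classic double-buffered pattern, where a page is made durable in a journal area before being written in place so a torn write can be repaired on recovery; the second models what an atomic-write primitive allows, where the device guarantees all-or-nothing and the extra write (and its flash wear) disappears.

```python
import os

PAGE = 16 * 1024  # InnoDB-style 16 KB page size (illustrative)

def doublewrite(datafile, journal, offset, page):
    """Classic double-buffered write: the page first becomes durable in
    a journal area; only then is the data file updated in place. If the
    in-place write is torn, recovery can restore it from the journal."""
    with open(journal, "wb") as j:
        j.write(page)
        j.flush()
        os.fsync(j.fileno())  # journal copy durable before the real write
    with open(datafile, "r+b") as d:
        d.seek(offset)
        d.write(page)
        d.flush()
        os.fsync(d.fileno())

def atomic_write(datafile, offset, page):
    """With an atomic-write primitive, the device promises the page lands
    all-or-nothing, so the journal pass is unnecessary: one write, half
    the bytes pushed to flash."""
    with open(datafile, "r+b") as d:
        d.seek(offset)
        d.write(page)
        d.flush()
        os.fsync(d.fileno())
```

Note that ordinary `os.fsync` here does not actually make the single write atomic; real atomicity comes from the storage API primitives the session describes. The sketch only shows why removing the journal pass halves write volume.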
In this talk I will cover Solid State Drive internals and how they affect database performance. I will present IO-level benchmarks for SATA (Intel 320 SSD) and PCI-e (FusionIO, Virident) cards to show absolute performance and give an idea of performance per dollar. Finally, I will cover how you can use MySQL and Percona Server with SSDs, which tuning parameters are most important, and what performance you may expect in real life.
MySQL has no single unbreakable backup solution. There are several choices, from included or optional open source dump utilities to commercial offerings from Oracle and third parties. Are you using the right approach for your data availability and recoverability needs? The choice of storage engine, with its underlying way of storing and retrieving data, is a foreign concept to the Oracle DBA; different engines bring different locking strategies and automatic crash recovery concerns.
Proper indexing is a key ingredient of database performance and MySQL is no exception. In this session we will talk about how MySQL uses indexes for query execution, how to come up with an optimal index strategy, how to decide when you need to add an index, and how to discover indexes which are not needed.